I was reminded just now that 30 years ago today, on 25th August 1994, this review article by myself and George Ellis was published in Nature (volume 370, pp. 609–615).
Sorry for the somewhat scrappy scanned copy. The article is still behind a paywall. No open access for the open Universe!
Can this really have been 30 years ago?
Anyway, that was the day I officially became labelled a “crank” by some, although others thought we were pushing at an open door. We were arguing against the then-standard cosmological model (based on the Einstein–de Sitter model), but the weight of evidence was already starting to shift. Although we didn’t predict the arrival of dark energy, the arguments we presented about the density of matter did turn out to be correct. A lot has changed since 1994, but we continue to live in a Universe with a density of matter much lower than the critical density, and our estimate of that density turned out to be spot on.
Looking back on this, I think valuable lessons could be learned if someone had the time and energy to go through precisely why so many papers at that time were consistent with a Universe of much higher density than the one we have now settled on. Confirmation bias undoubtedly played a role, and who is to say that it isn’t relevant to this day?
Here’s an interestingly different talk in the series of Cosmology Talks curated by Shaun Hotchkiss. The speaker, Sylvia Wenmackers, is a philosopher of science. According to the blurb on YouTube:
Her focus is probability and she has worked on a few theories that aim to extend and modify the standard axioms of probability in order to tackle paradoxes related to infinite spaces. In particular there is the paradox of the “infinite fair lottery”: within standard probability it seems impossible to write down a “fair” probability function on the integers. If you give each integer any non-zero probability, the total probability of all integers is unbounded, so the function is not normalisable. If you give each integer zero probability, the total probability of all integers is zero. No other option seems viable for a fair distribution. This paradox arises in a number of places within cosmology, especially in the context of eternal inflation and a possible multiverse of big bangs bubbling off. If every bubble is to be treated fairly, and there will ultimately be an unbounded number of them, how do we assign probability? The proposed solutions involve hyperreal numbers, such as infinitesimals and infinities with different relative sizes (reflecting how quickly things converge or diverge respectively). The multiverse has other problems, and other areas of cosmology where this issue arises also have their own problems (e.g. the initial conditions of inflation); however this could very well be part of the way towards fixing the cosmological multiverse.
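The normalisation problem described in the blurb is easy to see numerically. Here is a minimal Python sketch (my own illustration, not taken from the talk or the paper) of why no constant probability per integer can work:

```python
from fractions import Fraction

def total_probability(p, n):
    """Total probability assigned to the first n integers
    if each integer gets the same probability p."""
    return p * n

# Case 1: every integer gets the same non-zero probability p > 0.
p = Fraction(1, 2**20)
for n in (2**20, 2**25, 2**30):
    print(total_probability(p, n))  # 1, 32, 1024 -- grows without bound

# Case 2: every integer gets probability zero.
print(total_probability(0, 10**12))  # 0 -- the total can never reach 1

# Either way the axiom P(whole sample space) = 1 fails, which is
# precisely the "infinite fair lottery" paradox.
```

Any choice of a single fair probability value thus violates normalisation, which is why the proposed resolutions have to go outside the standard (real-valued, countably additive) axioms.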
The paper referred to in the presentation can be found here. There is a lot to digest in this thought-provoking talk, from the starting point on Kolmogorov’s axioms to the application to the multiverse, but this video gives me an excuse to repeat my thoughts on infinities in cosmology.
Most of us – whether scientists or not – have an uncomfortable time coping with the concept of infinity. Physicists have had a particularly difficult relationship with the notion of boundlessness, as various kinds of pesky infinities keep cropping up in calculations. In most cases this is symptomatic of deficiencies in the theoretical foundations of the subject. Think of the “ultraviolet catastrophe” of classical statistical mechanics, in which the electromagnetic radiation produced by a black body at a finite temperature is calculated to be infinitely intense at infinitely short wavelengths; this signalled the failure of classical statistical mechanics and ushered in the era of quantum mechanics about a hundred years ago. Quantum field theories have other forms of pathological behaviour, with mathematical components of the theory tending to run out of control to infinity unless they are healed using the technique of renormalization. The general theory of relativity predicts that singularities, in which physical properties become infinite, occur in the centre of black holes and in the Big Bang that kicked our Universe into existence. But even these are regarded as indications that we are missing a piece of the puzzle, rather than implying that somehow infinity is a part of nature itself.
The exception to this rule is the field of cosmology. Somehow it seems natural at least to consider the possibility that our cosmos might be infinite, either in extent or duration, or both, or perhaps even be a multiverse comprising an infinite collection of sub-universes. If the Universe is defined as everything that exists, why should it necessarily be finite? Why should there be some underlying principle that restricts it to a size our human brains can cope with?
On the other hand, there are cosmologists who won’t allow infinity into their view of the Universe. A prominent example is George Ellis, a strong critic of the multiverse idea in particular, who frequently quotes David Hilbert:
The final result then is: nowhere is the infinite realized; it is neither present in nature nor admissible as a foundation in our rational thinking—a remarkable harmony between being and thought
But to every Hilbert there’s an equal and opposite Leibniz:
I am so in favor of the actual infinite that instead of admitting that Nature abhors it, as is commonly said, I hold that Nature makes frequent use of it everywhere, in order to show more effectively the perfections of its Author.
You see that it’s an argument with quite a long pedigree!
Many years ago I attended a lecture by Alex Vilenkin, entitled The Principle of Mediocrity. This was a talk based on some ideas from his book Many Worlds in One: The Search for Other Universes, in which he discusses some of the consequences of the so-called eternal inflation scenario, which leads to a variation of the multiverse idea in which the universe comprises an infinite collection of causally-disconnected “bubbles” with different laws of low-energy physics applying in each. Indeed, in Vilenkin’s vision, all possible configurations of all possible things are realised somewhere in this ensemble of mini-universes.
One of the features of this scenario is that it brings the anthropic principle into play as a potential “explanation” for the apparent fine-tuning of our Universe that enables life to be sustained within it. We can only live in a domain wherein the laws of physics are compatible with life so it should be no surprise that’s what we find. There is an infinity of dead universes, but we don’t live there.
I’m not going to go on about the anthropic principle here, although it’s a subject that’s quite fun to write about or, better still, give a talk about, especially if you enjoy winding people up! What I did want to mention, though, is that Vilenkin correctly pointed out that three ingredients are needed to make this work:
An infinite ensemble of realizations
A discretizer
A randomizer
Item 2 involves some sort of principle that ensures that the number of possible states of the system we’re talking about is not infinite. A very simple example from quantum physics might be the two spin states of an electron, up (↑) or down (↓). No “in-between” states are allowed, according to our tried-and-tested theories of quantum physics, so the state space is discrete. In the more general context required for cosmology, the states are the allowed “laws of physics” (i.e. possible false vacuum configurations). The space of possible states is very much larger here, of course, and the theory that makes it discrete much less secure. In string theory, the number of false vacua is estimated at 10^500. That’s certainly a very big number, but it’s not infinite so it will do the job needed.
Item 3 requires a process that realizes every possible configuration across the ensemble in a “random” fashion. The word “random” is a bit problematic for me because I don’t really know what it’s supposed to mean. It’s a word that far too many scientists are content to hide behind, in my opinion. In this context, however, “random” really means that the assignment of states to elements in the ensemble must be ergodic, i.e. that it must eventually visit every part of the state space with non-zero probability. This is the kind of process that’s needed if an infinite collection of monkeys is indeed to type the (large but finite) complete works of Shakespeare. It’s not enough that there be an infinite number of monkeys and that the works of Shakespeare be finite. The process of typing must also be ergodic.
Now it’s by no means obvious that monkeys would type ergodically. If, for example, they always hit two adjoining keys at the same time then the process would not be ergodic. Likewise it is by no means clear to me that the process of realizing the ensemble is ergodic. In fact I’m not even sure that there’s any process at all that “realizes” the string landscape. There’s a long and dangerous road from the (hypothetical) ensembles that exist even in standard quantum field theory to an actually existing “random” collection of observed things…
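The difference between an ergodic and a non-ergodic typist is easy to simulate. Here is a toy sketch in Python (the two-letter alphabet, the tiny target string and the particular “stuck keys” failure mode are my own illustrative assumptions, not anything from the literature):

```python
import random

ALPHABET = "ab"
TARGET = "ab"   # a (very short) stand-in for the complete works of Shakespeare

def ergodic_monkey(n_keys, rng):
    # Each keystroke is independent and uniform, so every finite string
    # has non-zero probability of appearing: the process is ergodic.
    return "".join(rng.choice(ALPHABET) for _ in range(n_keys))

def stuck_monkey(n_keys):
    # The two adjoining keys are stuck together and always produce "aa",
    # so any target containing a "b" is simply unreachable: not ergodic.
    return "aa" * (n_keys // 2)

rng = random.Random(1)
print(TARGET in ergodic_monkey(10_000, rng))  # True, with overwhelming probability
print(TARGET in stuck_monkey(10_000))         # False, however long it types
```

However many keystrokes you allow the stuck monkey, the target never appears: an unbounded number of trials is useless if the process generating them cannot reach the relevant part of the state space.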
More generally, the mere fact that a mathematical solution of an equation can be derived does not mean that that equation describes anything that actually exists in nature. In this respect I agree with Alfred North Whitehead:
There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.
It’s a quote I think some string theorists might benefit from reading!
Items 1, 2 and 3 are all needed to ensure that each particular configuration of the system is actually realized in nature. If we had an infinite number of realizations but either an infinite number of possible configurations or a non-ergodic selection mechanism, then there’s no guarantee that each possibility would actually happen. The success of this explanation consequently rests on quite stringent assumptions.
I’m a sceptic about this whole scheme, for many reasons. First, I’m uncomfortable with infinity – that’s what you get for working with George Ellis, I guess. Second, and more importantly, I don’t understand string theory and am in any case unsure of the ontological status of the string landscape. Finally, although a large number of prominent cosmologists have waved their hands with commendable vigour, I have never seen anything even approaching a rigorous proof that eternal inflation does lead to a realized infinity of false vacua. If such a thing exists, I’d really like to hear about it!
Some years ago – actually about 30! – I wrote a book with George Ellis about the density of matter in the Universe. Many of the details in that book are of course out of date now but the main conclusions still stand. We started the book with a general discussion of cosmological models which I think also remains relevant today so I thought I’d do a quick recap here.
Anyone who takes even a passing interest in cosmology will know that it’s a field that’s not short of controversy, sometimes reinforced by a considerable level of dogmatism in opposing camps. In understanding why this is the case, it is perhaps helpful to note that much of the problem stems from philosophical disagreements about which are the appropriate criteria for choosing a “good” (or at least acceptable) theory of cosmology. Different approaches to cosmology develop theories aimed at satisfying different criteria, and preferences for the different approaches to a large extent reflect these different initial goals. It would help to clarify this situation if one could make explicit the issues relating to choices of this kind, and separate them from the more ‘physical’ issues that concern the interpretation of data.
The following philosophical diversion was intended to initiate a debate within the cosmological community. Some cosmologists in effect claim that there is no philosophical content in their work and that philosophy is an irrelevant and unnecessary distraction from their work as scientists. I would contend that they are, whether they like it or not, making philosophical (and, in many cases, metaphysical) assumptions, and it is better to have these out in the open than hidden.
To provide a starting point for discussion, consider the following criteria, which might be applied in the wider context of scientific theories in general, encapsulating the essentials of this issue:
One can imagine a kind of rating system which judges cosmological models against each of these criteria. The point is that cosmologists from different backgrounds implicitly assign a different weighting to each of them, and therefore end up pursuing different goals from one another. There is a possibility of both positive and negative ratings in each of these areas.
Note that such categories as “importance”, “intrinsic interest” and “plausibility” are not included. Insofar as they have any meaning apart from personal prejudice, they should be reflected in the categories above, and could perhaps be defined as aggregate estimates following on from the proposed categories.
Category 1(c) (“beauty”) is difficult to define objectively but nevertheless is quite widely used, and seems independent of the others; it is the one that is most problematic. Compare, for example, the apparently “beautiful” circular orbit model of the Solar System with the apparently ugly elliptic orbits found by Kepler. Only after Newton introduced his theory of gravitation did it become clear that beauty in this situation resided in the inverse-square law itself, rather than in the outcomes of that law. Some might therefore wish to omit this category.
One might think that category 1(a) (“logical consistency”) would be mandatory, but this is not so, basically because we do not yet have a consistent Theory of Everything.
Again one might think that negative scores in 4(b) (“confirmation”) would disqualify a theory but, again, that is not necessarily so, because measurement processes may involve systematic errors and observational results are all to some extent uncertain owing to statistical limitations. Confirmation can therefore be queried. A theory might also be testable [4(a)] in principle, but perhaps not in practice at a given time, because the technology may not exist to perform the necessary experiment or observation.
The idea is that even when there is disagreement about the relative merits of different models or theories, there is a possibility of agreement on the degree to which the different approaches could and do meet these various criteria. Thus one can explore the degree to which each of these criteria is met by a particular cosmological model or approach to cosmology. We suggest that one can distinguish five broadly different approaches to cosmology, roughly corresponding to major developments at different historical epochs:
These approaches are not completely independent of each other, but any particular model will tend to focus more on one or other aspect and may even completely leave out others. Comparing them with the criteria above, one ends up with a star rating system something like that shown in the Table, in which George and I applied a fairly arbitrary scale to the assignment of the ratings!
To a large extent you can take your pick as to the weights you assign to each of these criteria, but my underlying view is that without a solid basis of experimental support [4(b)], or at least the possibility of confirmation [4(a)], a proposed theory is not a ‘good’ one from a scientific point of view. If one can say what one likes and cannot be proved wrong, one is free from the normal constraints of scientific discipline. This contrasts with a major thrust in modern cosmological thinking which emphasizes criteria [2] and [3] at the expense of [4].
Today, 16th February 2023, sees the official publication of a special 50th anniversary edition of the classic monograph The Large Scale Structure of Space-Time by Stephen Hawking and George Ellis. My copy of a standard issue of the book is on the left; the special new edition is on the right. The book has been reprinted many times, which testifies to its status as an authoritative treatise. I don’t have the new edition, actually. I just stole the picture from the Facebook page of George Ellis, with whom I have collaborated on a book (though not one as significant as the one shown above).
This book is by no means an introductory text but is full of interesting insights for people who have studied general relativity before. Stephen Hawking left us some years ago, of course, but George is still going strong so let me take this opportunity to congratulate him on the publication of this special anniversary edition!
P.S. It struck me while writing this post that I’ve been working as a cosmologist in various universities for getting on for 35 years and I’ve never taught a course on general relativity. As I’ll be retiring pretty soon, it’s looking very likely that I never will…
So, back to Brighton and a sweltering office on the Sussex University campus. I made it back to pick up the list of names I’ll be reading out at tomorrow afternoon’s graduation ceremony, in time to give me a few hours’ practice tonight. On the train back from Cardiff I remembered a discussion I had at the conference last week about various views of cosmology, especially the idea that we might live in a multiverse. I did a bit of a dig around and found this nice video of esteemed cosmologist (and erstwhile co-author of mine) George Ellis talking about this, and also about his favourite kind of universe (namely one with a compact topology).
At the end of the 2015 Rugby World Cup, I wrote a post recalling the World Cup of 1995, which was held in South Africa while I was visiting there. I had the privilege of seeing the great Jonah Lomu demolishing the England defence that day. Today I learned with great sadness that he has passed away, aged just 40. Since Jonah Lomu played such a central role in one of the most amazing sporting experiences of my life, which lives in my memory as if it happened yesterday, I wanted to take the opportunity to pay tribute to the awesome sportsman that he was by sharing that memory again.
In 1995 I was visiting George Ellis at the University of Cape Town to work on a book, which was published in 1997. The book is now rather out of date, but I think it turned out rather well and it was certainly a lot of fun working on it. Of course it was a complete coincidence that I timed my trip to Cape Town exactly to cover the period of the Rugby World Cup. Well, perhaps not a complete coincidence. In fact I was lucky enough to get a ticket for the semi-final of that tournament between England and New Zealand at Newlands, in Cape Town. I was in the stand at one end of the ground, and saw New Zealand – spearheaded by the incredible Jonah Lomu – score try after try in the distance at the far end during the first half. Here is the first, very soon after the kickoff, when Andrew Mehrtens wrong-footed England by kicking to the other side of the field from where the forwards were lined up. The scrambling defence conceded a scrum which led to a ruck, from which this happened:
Jonah Lomu was unstoppable that day. One of the All Blacks later quipped that “Rugby is a team game. Fourteen players all know that their job is to give the ball to Jonah”.
It was one-way traffic in the first half but England played much better in the second, with the result that all the action was again at the far end of the pitch. However, right at the end of the match Jonah Lomu scored another try, this time at the end where I was standing. I’ll never forget the sight of that enormous man sprinting towards me, and I’m glad it wasn’t my job to try to stop him, especially having seen what happened to Underwood, Catt and Carling when they tried to bring him down. Lomu scored four tries in that game, in one of the most memorable performances by any sportsman in any sport. It’s so sad that he has gone. It’s especially hard to believe that such a phenomenal athlete could be taken at such a young age. My thoughts are with his family and friends.
So, the 2015 Rugby World Cup final takes place this weekend. It’s been an interesting tournament with some memorable games (and some notable disappointments). Anyway, I suddenly remembered that in 1995 I was in South Africa during the Rugby World Cup. In fact I was visiting George Ellis at the University of Cape Town to work on a book, which was published in 1997. The book is now rather out of date, but I think it turned out rather well and it was certainly a lot of fun working on it!
Was that really twenty years ago?
Of course it was a complete coincidence that I timed my trip to Cape Town exactly to cover the period of the Rugby World Cup. Well, perhaps not a complete coincidence. In fact I was lucky enough to get a ticket for the semi-final of that tournament between England and New Zealand at Newlands, in Cape Town. I was in the stand at one end of the ground, and saw New Zealand – spearheaded by the incredible Jonah Lomu – score try after try in the distance at the far end during the first half. Here is the first, very soon after the kickoff, when Andrew Mehrtens wrong-footed England by kicking to the other side of the field from where the forwards were lined up. The scrambling defence conceded a scrum which led to a ruck, from which this happened:
Even more impressively I had a very good view when Zinzan Brooke scored at the same end with a drop-goal off the back of a scrum. Not many No. 8 forwards have the skill to do that!
It was one-way traffic in the first half but in the second half England played much better, with the result that all the action was again at the far end of the pitch. However, right at the end of the match Jonah Lomu scored another try, this time at the end where I was standing. I’ll never forget the sight of that enormous man sprinting towards me, and I’m glad it wasn’t my job to try to stop him, especially having seen what happened to Underwood, Catt and Carling when they tried to bring him down.
Anyway, I hope it’s a good final on Saturday. For what it’s worth, I did pick the two finalists correctly before the tournament. I’m expecting the All Blacks to beat Australia comfortably, but am not going to bet on the result!
I was having a chat over coffee yesterday with some members of the Mathematics Department here at the University of Cape Town, one of whom happens to be an expert at Bridge, actually representing South Africa in international competitions. That’s a much higher level than I could ever aspire to so I was a bit nervous about mentioning my interest in the game, but in the end I explained that I have in the past used Bridge (and other card games) to describe how Bayesian probability works; see this rather lengthy post for more details. The point is that as cards are played, one’s calculation of the probabilities of where the important cards lie changes in the light of information revealed. It makes much more sense to play Bridge according to a Bayesian interpretation, in which probability represents one’s state of knowledge, rather than what would happen over an ensemble of “random” realisations.
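As a toy illustration of the kind of updating involved, here is a Python sketch of the standard “vacant places” reasoning used by Bridge players (my own example, not anything from the post linked above): before any cards are seen, a particular missing card, say the king of spades, is equally likely to sit in any of the opponents’ unseen slots, and as each opponent shows cards that are not that king, the slot counts, and hence the probability, are updated.

```python
from fractions import Fraction

def p_west_holds_card(vacant_west, vacant_east):
    # "Vacant places": a specific unseen card is equally likely to occupy
    # any unseen slot, so P(West holds it) is West's share of the slots.
    return Fraction(vacant_west, vacant_west + vacant_east)

# Before a card is played, each defender holds 13 unseen cards:
print(p_west_holds_card(13, 13))          # 1/2

# Later, West has shown 5 cards and East 4, none of them the spade king
# (and, for this simple sketch, revealing no other information):
print(p_west_holds_card(13 - 5, 13 - 4))  # 8/17 -- the odds have shifted
```

This is Bayesian updating in miniature: the probability is a state of knowledge that changes as cards are revealed, not a frequency over imaginary replays of the deal.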
This particular topic – and Bayesian inference in general – is also discussed in my book From Cosmos to Chaos (which is, incidentally, now available in paperback). On my arrival in Cape Town I gave a copy of this book to my genial host, George Ellis, and our discussion of Bridge prompted him to say that he thought I had missed a trick in the book by not mentioning the connections between Bayesian probability and neuroscience. I hadn’t written about this because I didn’t know anything about it, so George happily enlightened me by sending a few review articles, such as this:
I can’t post it all, for fear of copyright infringement, but you get the idea. Here’s another one:
A neurocentric approach to Bayesian inference Christopher D. Fiorillo
Abstract A primary function of the brain is to infer the state of the world in order to determine which motor behaviours will best promote adaptive fitness. Bayesian probability theory formally describes how rational inferences ought to be made, and it has been used with great success in recent years to explain a range of perceptual and sensorimotor phenomena.
As a non-expert in neuroscience, I find these very interesting. I’ve long been convinced that, from the point of view of formal reasoning, the Bayesian approach to probability is the only way that makes sense, but until reading these I wasn’t aware that there was serious work being done on the possibility that it also describes how the brain works in situations where there is insufficient information to be sure what the correct approach is. Except, of course, for players of Bridge, who know it very well.
There’s just a chance that I may have readers out there who know more about this Bayes-Brain connection. If so, please enlighten me further through the comments box!
After one of my lectures a few weeks ago, a student came up to me and asked whether I had an Erdős number and, if so, what it was. I didn’t actually know what he was talking about, but was reminded of it yesterday, so I tried to find out.
In case you didn’t know, Paul Erdős (who died in 1996) was an eccentric Hungarian mathematician who wrote more than 1000 mathematical papers during his life but never settled in one place for any length of time. He travelled between colleagues and conferences, mostly living out of a suitcase, and showed no interest at all in property or possessions. His story is a fascinating one, and his contributions to mathematics were immense and wide-ranging. The Erdős number is a tiny part of his legacy, but one that seems to have taken hold. Some mathematicians appear to take it very seriously, but most treat it with tongue firmly in cheek, as I certainly do.
So what is the Erdős number?
It’s actually quite simple to define. First, Erdős himself is assigned an Erdős number of zero. Anyone who co-authored a paper with Erdős has an Erdős number of 1. Then anyone who wrote a paper with someone who wrote a paper with Erdős has an Erdős number of 2, and so on. The Erdős number is thus a measure of “collaborative distance”, with lower numbers representing closer connections.
I say it’s quite easy to define, but it’s rather harder to calculate. Or it would be were it not for modern bibliographic databases. In fact there’s a website run by the American Mathematical Society which allows you to calculate your Erdős number as well as a similar measure of collaborative distance with respect to any other mathematician.
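The underlying computation is just a shortest-path (breadth-first) search on the co-authorship graph. A minimal sketch in Python (the graph below is invented for illustration; the AMS tool of course works on a real bibliographic database):

```python
from collections import deque

def erdos_number(coauthors, start, root="Erdős"):
    # Breadth-first search: the Erdős number is the length of the
    # shortest chain of co-authorships connecting `start` to Erdős.
    if start == root:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, distance = queue.popleft()
        for other in coauthors.get(person, ()):
            if other == root:
                return distance + 1
            if other not in seen:
                seen.add(other)
                queue.append((other, distance + 1))
    return None  # no chain of co-authors reaches Erdős at all

# A made-up co-authorship graph (symmetric, like real co-authorship):
graph = {
    "Erdős": ["Alice"],
    "Alice": ["Erdős", "Bob"],
    "Bob":   ["Alice", "Carol"],
    "Carol": ["Bob"],
}
print(erdos_number(graph, "Carol"))  # 3
```

Because breadth-first search explores the graph in order of increasing distance, the first chain it finds to Erdős is guaranteed to be a shortest one.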
A list of individuals with very low Erdős numbers (1, 2 or 3) can be found here.
Given that Erdős was basically a pure mathematician, I didn’t at first expect to show up as having any Erdős number at all, since I’m not really a mathematician and I’m certainly not very pure. However, his influence is clearly felt very strongly in physics, and a large number of physicists (and astronomers) have a surprisingly small Erdős number. According to the AMS website, mine is 5 – much lower than I would have expected. The path from me to Erdős in this case goes through G.F.R. Ellis, a renowned expert in the mathematics of general relativity (as well as a ridiculous number of other things!). I wrote a paper and a book with George Ellis some time ago.
However, looking at the list I realise that I have another route to Erdős, through the great Russian mathematician Vladimir Arnold, who has an Erdős number of 3. Arnold wrote a paper with Sergei Shandarin with whom I wrote a paper some time ago. That gives me another route to an Erdős number of 5, but I can’t find any paths shorter than that.
I guess many researchers will have links through their PhD supervisors, so I checked mine – John D. Barrow. It turns out he also has an Erdős number of 5 so a path through him doesn’t lower my number.
I used to work in the School of Mathematical Sciences at Queen Mary, University of London, and it is there that I found some people I know well who have lower Erdős numbers than me. Reza Tavakol, for example, has an Erdős number of 3 but although I’ve known him for 20 years, we’ve never written a paper together. If we did, I could reduce my Erdős number by one. You never know….
This means that anyone I’ve ever written a paper with has an Erdős number no greater than 6. I doubt if it’s very important, but it definitely qualifies as Quite Interesting.
Sean Carroll, blogger-in-chief at Cosmic Variance, has ventured abroad from his palatial Californian residence and is currently slumming it in a little town called Oxford where he is attending a small conference in celebration of the 70th birthday of George Ellis. In fact he’s been posting regular live commentaries on the proceedings which I’ve been following with great interest. It looks an interesting and unusual meeting because it involves both physicists and philosophers and it is based around a series of debates on topics of current interest. See Sean’s posts here, here and here for expert summaries of the three days of the meeting.
Today’s dispatches included an account of George’s own talk, which appears to have involved delivering a polemic against the multiverse, something he has been known to do from time to time. I posted something on it myself, in fact. I don’t think I’m as fundamentally opposed as George to the idea that we might live in a bit of space-time that may belong to some sort of larger collection in which other bits have different properties, but it does bother me how many physicists talk about the multiverse as if it were an established fact. There certainly isn’t any observational evidence that this is true, and the theoretical arguments usually advanced are far from rigorous. The multiverse certainly is a fun thing to think about; I just don’t think it’s really needed.
There is one red herring that regularly floats into arguments about the multiverse, and that concerns testability. Different bits of the multiverse can’t be observed directly by an observer in a particular place, so it is often said that the idea isn’t testable. I don’t think that’s the right way to look at it. If there is a compelling physical theory that can account convincingly for a realised multiverse then that theory really should have other necessary consequences that are testable, otherwise there’s no point. Test the theory in some other way and you test whether the multiverse emanating from it is sound too.
However, that fairly obvious statement isn’t really the point of this piece. As I was reading Sean’s blog post for today you could have knocked me down with a feather when I saw my name crop up:
Orthodoxy is based on the beliefs held by elites. Consider the story of Peter Coles, who tried to claim back in the 1990’s that the matter density was only 30% of the critical density. He was threatened by a cosmological bigwig, who told him he’d be regarded as a crank if he kept it up. On a related note, we have to admit that even scientists base beliefs on philosophical agendas and rationalize after the fact. That’s often what’s going on when scientists invoke “beauty” as a criterion.
George was actually talking about a paper we co-wrote for Nature in which we went through the different arguments that had been used to estimate the average density of matter in the Universe, tried to weigh up which were the more reliable, and came to the conclusion that the answer was in the range 20 to 40 percent of the critical density. There was a considerable theoretical prejudice at the time, especially from adherents of inflation, that the density should be very close to the critical value, so we were running against the crowd to some extent. I remember we got quite a lot of press coverage at the time and I was invited to go on Radio 4 to talk about it, so it was an interesting period for me. Working with George was a tremendous experience too.
I won’t name the “bigwig” George referred to, although I will say it was a theorist; it’s more fun for those working in the field to guess for themselves! Opinions among other astronomers and physicists were divided. One prominent observational cosmologist was furious that we had criticized his work (which had yielded a high value of the density). On the other hand, Martin Rees (now “Lord” but then just plain “Sir”) said that he thought we were pushing at an open door and was surprised at the fuss.
Later on, in 1996, we expanded the article into a book in which we covered the ground more deeply but came to the same conclusion as before. The book and the article it was based on are now both very dated because of the huge advances in observational cosmology over the last decade. However, the intervening years have shown that we were right in our assessment: the standard cosmology has about 30% of the critical density.
Of course there was one major thing we didn’t anticipate which was the discovery in the late 1990s of dark energy which, to be fair, had been suggested by others more prescient than us as early as 1990. You can’t win ’em all.
So that’s the story of my emergence as a crank, a title to which I’ve tried my utmost to do justice since then. Actually, I would have liked to have had the chance to go to George’s meeting in Oxford, primarily to greet my erstwhile collaborator, whom I haven’t seen for ages. But it was invitation-only. I can’t work out whether these days I’m too cranky or not cranky enough to get to go to such things. Looking at the reports of the talks, I rather think it could be the latter.
Now, anyone care to risk the libel laws and guess who Professor BigWig was?