Archive for Cosmology

A Non-accelerating Universe?

Posted in Astrohype, The Universe and Stuff on October 26, 2016 by telescoper

There’s been quite a lot of reaction on the interwebs over the last few days (much of it very misleading; here’s a sensible account) to a paper by Nielsen, Guffanti and Sarkar which has just been published online in Scientific Reports, an offshoot of Nature. I think the above link should take you to an “open access” version of the paper but if it doesn’t you can find the arXiv version here. I haven’t cross-checked the two versions so the arXiv one may differ slightly.

Anyway, here is the abstract:

The ‘standard’ model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present — as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these ‘standardisable candles’ indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion.

Obviously I haven’t been able to repeat the statistical analysis, but I’ve skimmed over what they’ve done and as far as I can tell it looks like a fairly sensible piece of work (although it is a frequentist analysis). Here is the telling plot (from the Nature version) in terms of the dark energy (y-axis) and matter (x-axis) density parameters:

[Figure: confidence contours in the (Ωm, ΩΛ) plane]

Models lying along the line shown in this plane have the correct balance between Ωm and ΩΛ for the decelerating effect of the former to cancel the accelerating effect of the latter (a special case is the origin of the plot, which is called the Milne model and represents an entirely empty universe). The contours show the “1, 2 and 3σ” confidence regions, with all other parameters regarded as nuisance parameters. It is true that the line of no acceleration does pass inside the 3σ contour, so in that sense it is not entirely inconsistent with the data. On the other hand, the “best fit” (which is at the point Ωm=0.341, ΩΛ=0.569) does represent an accelerating universe.
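For a universe containing only pressureless matter and a cosmological constant, the no-acceleration line is where the present-day deceleration parameter q0 = Ωm/2 − ΩΛ vanishes. Here is a minimal sketch (ignoring radiation; the only numbers taken from the post are the best-fit values):

```python
def q0(omega_m, omega_lambda):
    """Present-day deceleration parameter for matter plus a
    cosmological constant: q0 = Omega_m/2 - Omega_Lambda.
    q0 < 0 means the expansion is currently accelerating."""
    return omega_m / 2.0 - omega_lambda

# Best-fit point quoted above: q0 < 0, i.e. accelerating
print(q0(0.341, 0.569))

# A point on the no-acceleration line Omega_Lambda = Omega_m/2:
print(q0(0.4, 0.2))
```

On this line the two effects cancel exactly; the Milne model at (0, 0) is the special empty case.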

I am not all that surprised by this result, actually. I’ve always felt that, taken on its own, the evidence for cosmic acceleration from supernovae alone was not compelling. However, when it is combined with other measurements (particularly of the cosmic microwave background and large-scale structure) which are sensitive to other aspects of the cosmological space-time geometry, the agreement is extremely convincing and has established a standard “concordance” cosmology. The CMB, for example, is particularly sensitive to spatial curvature which, measurements tell us, must be close to zero. The Milne model, on the other hand, has a large (negative) spatial curvature that is entirely excluded by CMB observations. Curvature is regarded as a “nuisance parameter” in the above diagram.

I think this paper is a worthwhile exercise. Subir Sarkar (one of the authors) in particular has devoted a lot of energy to questioning the standard ΛCDM model which far too many others accept unquestioningly. That’s a noble thing to do, and it is an essential part of the scientific method, but this paper only looks at one part of an interlocking picture. The strongest evidence comes from the cosmic microwave background and despite this reanalysis I feel the supernovae measurements still provide a powerful corroboration of the standard cosmology.

Let me add, however, that the supernovae measurements do not directly measure cosmic acceleration. If one tries to account for them with a model based on Einstein’s general relativity, the assumption that the Universe is homogeneous and isotropic on large scales, and certain kinds of matter and energy, then the observations do imply a universe that accelerates. Any or all of those assumptions may be violated (though some possibilities are quite heavily constrained). In short we could, at least in principle, simply be interpreting these measurements within the wrong framework, and statistics can’t help us with that!

KiDS-450: Testing extensions to the standard cosmological model [CEA]

Posted in The Universe and Stuff on October 19, 2016 by telescoper

Since I’ve just attended a seminar in Cardiff by Catherine Heymans on exactly this work, I couldn’t resist reblogging the arXiver entry for this paper which appeared on arXiv a couple of days ago.

The key finding is that the weak lensing analysis of KiDS data (which is mainly sensitive to the distribution of matter at low redshift) does seem to be discrepant with the predictions of the standard cosmological model established by Planck (which is sensitive mainly to high-redshift fluctuations).

Could this discrepancy be interpreted as evidence of something going on beyond the standard cosmology? Read the paper to explore some possibilities!

arXiver

http://arxiv.org/abs/1610.04606

We test extensions to the standard cosmological model with weak gravitational lensing tomography using 450 deg$^2$ of imaging data from the Kilo Degree Survey (KiDS). In these extended cosmologies, which include massive neutrinos, nonzero curvature, evolving dark energy, modified gravity, and running of the scalar spectral index, we also examine the discordance between KiDS and cosmic microwave background measurements from Planck. The discordance between the two datasets is largely unaffected by a more conservative treatment of the lensing systematics and the removal of angular scales most sensitive to nonlinear physics. The only extended cosmology that simultaneously alleviates the discordance with Planck and is at least moderately favored by the data includes evolving dark energy with a time-dependent equation of state (in the form of the $w_0-w_a$ parameterization). In this model, the respective $S_8 = \sigma_8 \sqrt{\Omega_{\rm m}/0.3}$ constraints agree at the $1\sigma$ level, and there is ‘substantial concordance’ between…

View original post 159 more words
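The $S_8$ statistic quoted in the abstract is straightforward to evaluate. The numbers below are approximate values of mine, purely for illustration (Planck 2015 best fit roughly σ8 ≈ 0.83, Ωm ≈ 0.315; KiDS-450 reported S8 ≈ 0.745); consult the papers for the real constraints.

```python
from math import sqrt

def s8(sigma8, omega_m):
    """Lensing amplitude parameter S_8 = sigma_8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * sqrt(omega_m / 0.3)

# Approximate Planck 2015 values (illustrative, not the published S_8):
planck = s8(0.83, 0.315)
kids = 0.745            # S_8 value reported by KiDS-450

print(planck, kids)     # the gap between these is the "discordance"
```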

A Universe of Two Trillion Galaxies

Posted in The Universe and Stuff on October 13, 2016 by telescoper

I just saw a press release that describes a paper, just out, authored by Chris Conselice et al from the University of Nottingham (in the Midlands), with this here abstract:

[Abstract of the Conselice et al. paper]

The key conclusion of this paper is that when the universe was only a few billion years old there were about ten times as many galaxies in a given volume of space as there are within a similar volume today, but most of these galaxies were much lower mass systems than, e.g., the Milky Way. In fact their masses are similar to those of the satellite galaxies surrounding the Milky Way. These objects are numerous but so faint that even in very deep surveys with very big telescopes they are very easy to miss.

Here’s an image from a deep survey: this is from the Hubble Space Telescope Great Observatories Origins Deep Survey (HST-GOODS).

[Image: the HST GOODS-South deep field]

You can click on this to make it larger if you wish. This is typical of a “pencil beam” survey. It opens a very small window on the heavens – about a millionth of the total area of the sky – in a direction chosen to avoid having too many bright stars from our own Galaxy getting in the way. When you look at such a patch with a big telescope for a long time, what you see is basically all galaxies. The few stars in the above image can be identified by the diffraction patterns they produce, but almost every fuzzy blob in the picture is a galaxy. It looks like there are a lot of galaxies in this image, but the real number seems to be substantially higher than we thought.

When I’ve given popular talks about this kind of thing I’ve always said something like “There are at least as many galaxies in the observable Universe as there are stars in our own Galaxy”. It turns out that I was wise to include the “at least as”. There are about 100 billion (10^11) stars in the Milky Way, but the latest estimate is now that there are two trillion (2×10^12) galaxies in the observable Universe. I quote Douglas Adams:

“The Universe, as has been observed before, is an unsettlingly big place, a fact which for the sake of a quiet life most people tend to ignore. Many would happily move to somewhere rather smaller of their own devising, and this is what most beings in fact do.”

I believe this explains a lot about modern politics.

 

General Relativity and Cosmology: Unsolved Questions and Future Directions [CL]

Posted in The Universe and Stuff on October 12, 2016 by telescoper

I missed this when it appeared on the arXiv last week, but now that I’ve read it I couldn’t resist reblogging this nice review of the current state of General Relativity and its alternatives, with an emphasis on the cosmological ramifications.

arXiver

http://arxiv.org/abs/1609.09781

For the last 100 years, General Relativity (GR) has taken over the gravitational theory mantle held by Newtonian Gravity for the previous 200 years. This article reviews the status of GR in terms of its self-consistency, completeness, and the evidence provided by observations, which have allowed GR to remain the champion of gravitational theories against several other classes of competing theories. We pay particular attention to the role of GR and gravity in cosmology, one of the areas in which gravity dominates and where new phenomena and effects challenge the orthodoxy. We also review other areas where there are likely conflicts pointing to the need to replace or revise GR in order to represent observations correctly within a consistent theoretical framework. Observations have long been key both to the theoretical liveliness and viability of GR. We conclude with a discussion of the likely developments over the next 100 years.

Read this paper…

View original post 17 more words

Cosmology: Galileo to Gravitational Waves – with Hiranya Peiris

Posted in The Universe and Stuff on September 9, 2016 by telescoper

Here’s another thing I was planning to post earlier in the summer, but for some reason forgot. It’s a video of a talk given at the Royal Institution earlier this year by eminent cosmologist Prof. Hiranya Peiris of University College London. The introduction to the talk goes like this:

Modern fundamental physics contains ideas just as revolutionary as those of Copernicus or Newton; ideas that may radically change our understanding of the world; ideas such as extra dimensions of space, or the possible existence of other universes.

Testing these concepts requires enormous energies, far higher than what is achievable by the Large Hadron Collider at CERN, and in fact, beyond any conceivable Earth-bound experiments. However, at the Big Bang, the Universe itself performed the ultimate experiment and left clues and evidence about what was behind the origin of the cosmos as we know it, and how it is evolving. And the biggest clue is the afterglow of the Big Bang itself.

In the past decade we have been able to answer age-old questions accurately, such as how old the Universe is, what it contains, and its destiny. Along with these answers have also come many exciting new questions. Join Hiranya Peiris to unravel the detective story, explaining what we have uncovered, and how we know what we know.

Hiranya Peiris is Professor of Astrophysics in the Astrophysics Group in the Department of Physics and Astronomy at University College London. She is also the Principal Investigator of the CosmicDawn project, funded by the European Research Council.

She is also a member of the Planck Collaboration and of the ongoing Dark Energy Survey, the Dark Energy Spectroscopic Instrument and the Large Synoptic Survey Telescope. Her work both delves into the Cosmic Microwave Background and contributes towards the next generation of galaxy surveys that will yield deep insights into the evolution of the Universe.

I’ve heard a lot of people talk about “Cosmic Dawn” but I’ve never met her…

Anyway, here is the video. It’s quite long (almost an hour) but very interesting and well-presented for experts and non-experts alike!

Update: I’ve just heard the news that Hiranya is shortly to take up a new job in Sweden as Director of the Oscar Klein Centre for Cosmoparticle Physics. Hearty congratulations and good luck to her!

 

George Ellis – Are there multiple universes?

Posted in The Universe and Stuff on July 18, 2016 by telescoper

So, back to Brighton and a sweltering office on the Sussex University campus. I made it back to pick up the list of names I’ll be reading out at tomorrow afternoon’s graduation ceremony in time to give me a few hours’ practice tonight. On the train back from Cardiff I remembered a discussion I had at the conference last week about the various views on cosmology, especially the idea that we might live in a multiverse. I did a bit of a dig around and found this nice video of esteemed cosmologist (and erstwhile co-author of mine) George Ellis talking about this, and also about his favourite kind of universe (namely one with a compact topology).

 

Cosmology: A Bayesian Perspective

Posted in Talks and Reviews, The Universe and Stuff on July 14, 2016 by telescoper

For those of you who are interested, here are the slides I used in my invited talk at MaxEnt 2016 (Maximum Entropy and Bayesian Methods in Science and Engineering), yesterday (13th July 2016) in Ghent (Belgium).

MaxEnt 2016: Norton’s Dome and the Cosmological Density Parameter

Posted in The Universe and Stuff on July 11, 2016 by telescoper

The second in my sequence of posts tangentially related to talks at this meeting on Maximum Entropy and Bayesian Methods in Science and Engineering is inspired by a presentation this morning by Sylvia Wenmackers. The talk featured an example which was quite new to me called Norton’s Dome. There’s a full discussion of the implications of this example at John D. Norton’s own website, from which I have taken the following picture:

[Figure: Norton’s dome and its defining equation, from John D. Norton’s website]

This is basically a problem in Newtonian mechanics, in which a particle rolls down from the apex of a dome with a particular shape in response to a vertical gravitational field. The solution is well-determined and shown in the diagram.

An issue arises, however, when you consider the case where the particle starts at the apex of the dome with zero velocity. One solution in this case is that the particle stays put forever. However, it can be shown that there are other solutions in which the particle sits at the top for an arbitrary (finite) time before rolling down. This could happen, for example, if the particle were launched up the dome from some point with just enough kinetic energy to reach the top, where it is momentarily at rest before rolling down again.

Norton argues that this problem demonstrates a certain kind of indeterminism in Newtonian mechanics. The mathematical problem with the specified initial conditions clearly has a solution in which the ball stays at the top forever. This solution is unstable, which is a familiar situation in mechanics, but this equilibrium has an unusual property related to the absence of Lipschitz continuity. One might expect that an infinitesimal asymmetric perturbation of the particle or the shape of the surface would be needed to send the particle rolling down the slope, but in this case none is needed. This is because there isn’t just one solution that has zero velocity at the equilibrium, but an entire family, as described above. This is both curious and interesting, and it does raise the question of how to define a probability measure that describes these solutions.
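For the curious: in suitably scaled units the equation of motion along the dome’s surface is d²r/dt² = √r, where r is the distance from the apex, and the delayed-release solutions take the form r(t) = (t − T)⁴/144 for t ≥ T, with r = 0 before T. A quick numerical sketch confirms that the rolling branch really does satisfy the equation:

```python
# Norton's dome in scaled units: d2r/dt2 = sqrt(r). One solution with
# zero initial position and velocity is r = 0 forever; another, for
# any release time T, is r(t) = (t - T)**4 / 144 after T (0 before).

def r(t, T=1.0):
    """Delayed-release solution with release time T."""
    return 0.0 if t <= T else (t - T) ** 4 / 144.0

def d2(f, t, h=1e-4):
    """Central-difference second derivative."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

# On the rolling branch, acceleration equals sqrt(position):
for t in (1.5, 2.0, 3.0):
    print(t, d2(r, t), r(t) ** 0.5)
```

The family of solutions parameterised by T is exactly the family of zero-velocity solutions mentioned above.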

I don’t really want to go into the philosophical implications of this cute example, but it did strike me that there’s a similarity with an interesting issue in cosmology that I’ve blogged about before (in different terms).

This probably seems to have very little to do with physical cosmology, but now forget about domes and think instead about the behaviour of the mathematical models that describe the Big Bang. To keep things simple, I’m going to ignore the cosmological constant and just consider how things depend on one parameter, the density parameter Ω0. This is basically the ratio of the present density of matter in the Universe to what it would have to be in order for the expansion of the Universe eventually to halt. To put it a slightly different way, it measures the total energy of the Universe. If Ω0>1 then the total energy of the Universe is negative: its (negative) gravitational potential energy dominates over the (positive) kinetic energy. If Ω0<1 then the total energy is positive: kinetic trumps potential. If Ω0=1 exactly then the Universe has zero total energy: energy is precisely balanced, like a man on a tightrope.

A key point, however, is that the trade-off between positive and negative energy contributions changes with time. The result of this is that Ω is not fixed at the same value forever, but changes with cosmic epoch; we use Ω0 to denote the value that it takes now, at cosmic time t0, but it changes with time.

At the beginning, i.e. at the Big Bang itself,  all the Friedmann models begin with Ω arbitrarily close to unity at arbitrarily early times, i.e. the limit as t tends to zero is Ω=1.

In the case in which the Universe emerges from the Big Bang with a value of Ω just a tiny bit greater than one, it expands to a maximum size, at which point the expansion stops and recollapse begins. During the recollapse Ω grows without bound. Gravitational energy wins out over its kinetic opponent.

If, on the other hand, Ω sets out slightly less than unity – and I mean slightly, one part in 10^60 will do – the Universe evolves to a state where Ω is very close to zero. In this case kinetic energy is the winner and Ω ends up on the ground, mathematically speaking.

In the compromise situation with total energy zero, this exact balance always applies. The universe is always described by Ω=1. It walks the cosmic tightrope. But any small deviation early on results in runaway expansion or catastrophic recollapse. To get anywhere close to Ω=1 now – I mean even within a factor ten either way – the Universe has to be finely tuned.
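The tightrope can be made quantitative. For a Friedmann model containing a single fluid with p = (γ − 1)ρc², the deviation from flatness obeys 1 − 1/Ω(a) = (1 − 1/Ω0) a^(3γ−2), with the scale factor normalised to a = 1 today. For dust (γ = 1) the exponent is +1, so deviations grow in proportion to the scale factor. A short illustrative sketch (my own numbers):

```python
# Deviation from flatness for a single-fluid Friedmann model:
#   1 - 1/Omega(a) = (1 - 1/Omega_0) * a**(3*gamma - 2),  a = 1 today.
# For dust (gamma = 1) the exponent is +1, so Omega = 1 is an
# unstable fixed point: small early deviations grow with expansion.

def omega(a, omega0, gamma=1.0):
    """Density parameter at scale factor a, given omega0 today (a=1)."""
    dev = (1.0 - 1.0 / omega0) * a ** (3.0 * gamma - 2.0)
    return 1.0 / (1.0 - dev)

# Even a universe with Omega = 0.3 today was within a few parts in
# 10^10 of flatness back when the universe was 10^10 times smaller:
print(omega(1e-10, 0.3))
```

For radiation (γ = 4/3) the exponent steepens to 2, which is where fine-tuning estimates like "one part in 10^60" at very early epochs come from.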

The evolution of Ω is neatly illustrated by the following phase-plane diagram (taken from an old paper by Madsen & Ellis) describing a cosmological model involving a perfect fluid with an equation of state p=(γ−1)ρc^2. This is what happens for γ>2/3 (which includes dust, relativistic particles, etc):

[Figure: phase-plane diagram of the evolution of Ω with scale factor S, from Madsen & Ellis]

The top panel shows how the density parameter evolves with scale factor S; the bottom panel shows a completion of this portrait obtained using a transformation that allows the point at infinity to be plotted on a finite piece of paper (or computer screen).

As discussed above this picture shows that all these Friedmann models begin at S=0 with Ω arbitrarily close to unity and that the value of Ω=1 is an unstable fixed point, just like the situation of the particle at the top of the dome. If the universe has Ω=1 exactly at some time then it will stay that way forever. If it is perturbed, however, then it will eventually diverge and end up collapsing (Ω>1) or going into free expansion (Ω<1).  The smaller the initial perturbation,  the longer the system stays close to Ω=1.

The fact that all trajectories start at Ω(S=0)=1 means that one has to be very careful in assigning some sort of probability measure on this parameter, just as is the case with the Norton’s Dome problem I started with. About twenty years ago, Guillaume Evrard and I tried to put this argument on firmer mathematical grounds by assigning a sensible prior probability to Ω based on nothing other than the assumption that our Universe is described by a Friedmann model.

The result we got was that it should be of the form

P(\Omega) \propto \Omega^{-1}(\Omega-1)^{-1}.

I was very pleased with this result, which is based on a principle advanced by physicist Ed Jaynes, but I have no space to go through the mathematics here. Note, however, that this prior has three interesting properties: it is infinite at Ω=0 and Ω=1, and it has a very long “tail” for very large values of Ω. It’s not a very well-behaved measure, in the sense that it can’t be integrated over, but that’s not an unusual state of affairs in this game. In fact it is what is called an improper prior.
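The impropriety is easy to verify directly: partial fractions give Ω⁻¹(Ω − 1)⁻¹ = (Ω − 1)⁻¹ − Ω⁻¹, so the integral of the prior from 1 + ε up to (say) 2 equals log((1 + ε)/(2ε)), which diverges logarithmically as ε → 0 (and similarly at Ω = 0). A quick numerical sketch:

```python
from math import log

def tail(eps):
    """Exact integral of 1/(x*(x-1)) from 1+eps to 2, obtained via
    partial fractions: log((1+eps)/(2*eps))."""
    return log((1.0 + eps) / (2.0 * eps))

# Cross-check the closed form with a crude midpoint rule at eps = 0.01:
eps, n = 0.01, 100_000
h = (2.0 - (1.0 + eps)) / n
numeric = sum(h / (x * (x - 1.0))
              for x in (1.0 + eps + (i + 0.5) * h for i in range(n)))
print(numeric, tail(eps))       # the two agree closely

# The integral grows without bound as eps shrinks:
for e in (1e-2, 1e-4, 1e-6):
    print(e, tail(e))
```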

I think of this prior as being the probabilistic equivalent of Mark Twain’s description of a horse:

dangerous at both ends, and uncomfortable in the middle.

Of course the prior probability doesn’t tell us all that much. To make further progress we have to make measurements, form a likelihood and then, like good Bayesians, work out the posterior probability. In fields where there is a lot of reliable data the prior becomes irrelevant and the likelihood rules the roost. We weren’t in that situation in 1995 – and we’re arguably still not – so we should still be guided, to some extent, by what the prior tells us.

The form we found suggests that we can indeed reasonably assign most of our prior probability to the three special cases I have described. Since we also know that the Universe is neither totally empty nor ready to collapse, it does indicate that, in the absence of compelling evidence to the contrary, it is quite reasonable to have a prior preference for the case Ω=1.  Until the late 1980s there was indeed a strong ideological preference for models with Ω=1 exactly, but not because of the rather simple argument given above but because of the idea of cosmic inflation.

From recent observations we now know, or think we know, that Ω is roughly 0.26. To put it another way, this means that the Universe has roughly 26% of the density it would need to halt the cosmic expansion at some point in the future. Curiously, this corresponds precisely to the unlikely or “fine-tuned” case where our Universe lies in between the two states in which we might have expected it to end up.

Even if you accept my argument that Ω=1 is a special case that is in principle possible, it is still the case that it requires the Universe to have been set up with very precisely defined initial conditions. Cosmology can always appeal to special initial conditions to get itself out of trouble because we don’t know how to describe the beginning properly, but it is much more satisfactory if properties of our Universe are explained by understanding the physical processes involved rather than by simply saying that “things are the way they are because they were the way they were.” The latter statement remains true, but it does not enhance our understanding significantly. It’s better to look for a more fundamental explanation because, even if the search is ultimately fruitless, we might turn over a few interesting stones along the way.

The reasoning behind cosmic inflation admits the possibility that, for a very short period in its very early stages, the Universe went through a phase where it was dominated by a third form of energy, vacuum energy. This forces the cosmic expansion to accelerate; this means basically that the equation of state of the contents of the universe is described by γ<2/3 rather than the case γ>2/3 described above. This drastically changes the arguments I gave above.

Without inflation the case with Ω=1 is unstable: a slight perturbation to the Universe sends it diverging towards a Big Crunch or a Big Freeze. While inflationary dynamics dominate, however, this case has a very different behaviour. Not only is it stable, it becomes an attractor to which all possible universes converge. Here’s what the phase plane looks like in this case:

[Figure: phase-plane diagram for the inflationary case, γ<2/3]

 

Whatever the pre-inflationary initial conditions, the Universe will emerge from inflation with Ω very close to unity.
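The attractor behaviour follows from the same relation used earlier, 1 − 1/Ω ∝ a^(3γ−2): during inflation γ < 2/3, so the exponent is negative and expansion drives Ω towards unity rather than away from it. A sketch, taking γ = 0 (pure vacuum energy) and the conventional ballpark of ~60 e-folds of expansion (my own illustrative numbers):

```python
# During inflation (gamma < 2/3) the exponent 3*gamma - 2 in
#   1 - 1/Omega = (1 - 1/Omega_i) * (a/a_i)**(3*gamma - 2)
# is negative, so Omega = 1 becomes an attractor instead of a repeller.

def omega_after(omega_i, growth, gamma=0.0):
    """Omega after the scale factor grows by the given factor."""
    dev = (1.0 - 1.0 / omega_i) * growth ** (3.0 * gamma - 2.0)
    return 1.0 / (1.0 - dev)

# Start far from flatness; inflate by roughly e^60 (about 1e26):
print(omega_after(0.1, 1e26))   # driven essentially exactly to 1
```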

So how can we reconcile inflation with current observations that suggest a low matter density? The key to this question is that what inflation really does is expand the Universe by such a large factor that the curvature radius becomes infinitesimally small. If there is only “ordinary” matter in the Universe then this requires that the universe have the critical density. However, in Einstein’s theory the curvature is zero only if the total energy is zero. If there are other contributions to the global energy budget besides that associated with familiar material then one can have a low value of the matter density as well as zero curvature. The missing link is dark energy, and the independent evidence we now have for it provides a neat resolution of this problem.

Or does it? Although spatial curvature doesn’t really care about what form of energy causes it, it is surprising to some extent that the dark matter and dark energy densities are similar. To many minds this unexplained coincidence is a blemish on the face of an otherwise rather attractive structure.

It can be argued that there are initial conditions for non-inflationary models that lead to a Universe like ours. This is true. It is not logically necessary to have inflation in order for the Friedmann models to describe a Universe like the one we live in. On the other hand, it does seem to be a reasonable argument that the set of initial data that is consistent with observations is larger in models with inflation than in those without it. It is rational therefore to say that inflation is more probable to have happened than the alternative.

I am not totally convinced by this reasoning myself, because we still do not know how to put a reasonable measure on the space of possibilities existing prior to inflation. This would have to emerge from a theory of quantum gravity which we don’t have. Nevertheless, inflation is a truly beautiful idea that provides a framework for understanding the early Universe that is both elegant and compelling. So much so, in fact, that I almost believe it.

 

MaxEnt 2016: Some thoughts on the infinite

Posted in The Universe and Stuff on July 10, 2016 by telescoper

I thought I might do a few posts about matters arising from talks at this workshop I’m at. Today is devoted to tutorial talks, and the second one was given by John Skilling and in the course of it, he made some comments about the concept of infinity in science. These remarks weren’t really central to his talk, but struck me as an interesting subject for a few tangential remarks of my own.

Most of us – whether scientists or not – have an uncomfortable time coping with the concept of infinity. Physicists have had a particularly difficult relationship with the notion of boundlessness, as various kinds of pesky infinities keep cropping up in calculations. In most cases this is symptomatic of deficiencies in the theoretical foundations of the subject. Think of the ‘ultraviolet catastrophe‘ of classical statistical mechanics, in which the electromagnetic radiation produced by a black body at a finite temperature is calculated to be infinitely intense at infinitely short wavelengths; this signalled the failure of classical statistical mechanics and ushered in the era of quantum mechanics about a hundred years ago. Quantum field theories have other forms of pathological behaviour, with mathematical components of the theory tending to run out of control to infinity unless they are healed using the technique of renormalization. The general theory of relativity predicts that singularities, in which physical properties become infinite, occur in the centres of black holes and in the Big Bang that kicked our Universe into existence. But even these are regarded as indications that we are missing a piece of the puzzle, rather than implying that somehow infinity is a part of nature itself.

One exception to this rule is the field of cosmology. Somehow it seems natural at least to consider the possibility that our cosmos might be infinite, either in extent or duration, or both, or perhaps even be a multiverse comprising an infinite collection of sub-universes. If the Universe is defined as everything that exists, why should it necessarily be finite? Why should there be some underlying principle that restricts it to a size our human brains can cope with?

On the other hand, there are cosmologists who won’t allow infinity into their view of the Universe. A prominent example is George Ellis, a strong critic of the multiverse idea in particular, who frequently quotes David Hilbert:

The final result then is: nowhere is the infinite realized; it is neither present in nature nor admissible as a foundation in our rational thinking—a remarkable harmony between being and thought.

This comment is quoted from a famous essay, and it seems to echo earlier remarks by Carl Friedrich Gauss, which can be paraphrased:

Infinity is nothing more than a figure of speech which helps us talk about limits. The notion of a completed infinity doesn’t belong in mathematics.

This summarises Gauss’s attitude to the notion of a completed infinity, an attitude that Cantor’s later theory of infinite sets would challenge head-on. But to every Gauss there’s an equal and opposite Leibniz:

I am so in favor of the actual infinite that instead of admitting that Nature abhors it, as is commonly said, I hold that Nature makes frequent use of it everywhere, in order to show more effectively the perfections of its Author.

You see that it’s an argument with quite a long pedigree!

When I was at the National Astronomy Meeting in Llandudno a few years ago, I attended an excellent plenary session that featured a Gerald Whitrow Lecture, by Alex Vilenkin, entitled The Principle of Mediocrity. This was a talk based on some ideas from his book Many Worlds in One: The Search for Other Universes, in which he discusses some of the consequences of the so-called eternal inflation scenario, which leads to a variation of the multiverse idea in which the universe comprises an infinite collection of causally-disconnected “bubbles” with different laws of low-energy physics applying in each. Indeed, in Vilenkin’s vision, all possible configurations of all possible things are realised somewhere in this ensemble of mini-universes. An infinite number of National Astronomy Meetings, each with the same or different programmes, an infinite number of Vilenkins, etc etc.

One of the features of this scenario is that it brings the anthropic principle into play as a potential “explanation” for the apparent fine-tuning of our Universe that enables life to be sustained within it. We can only live in a domain wherein the laws of physics are compatible with life so it should be no surprise that’s what we find. There is an infinity of dead universes, but we don’t live there.

I’m not going to go on about the anthropic principle here, although it’s a subject that’s quite fun to write or, better still, give a talk about, especially if you enjoy winding people up! What I did want to mention, though, is that Vilenkin correctly pointed out that three ingredients are needed to make this work:

  1. An infinite ensemble of realizations
  2. A discretizer
  3. A randomizer

Item 2 involves some sort of principle that ensures that the number of possible states of the system we’re talking about is not infinite. A very simple example from quantum physics might be the two spin states of an electron, up (↑) or down (↓). No “in-between” states are allowed, according to our tried-and-tested theories of quantum physics, so the state space is discrete. In the more general context required for cosmology, the states are the allowed “laws of physics” (i.e. possible false vacuum configurations). The space of possible states is very much larger here, of course, and the theory that makes it discrete much less secure. In string theory, the number of false vacua is estimated at 10^500. That’s certainly a very big number, but it’s not infinite so it will do the job needed.

Item 3 requires a process that realizes every possible configuration across the ensemble in a “random” fashion. The word “random” is a bit problematic for me because I don’t really know what it’s supposed to mean. It’s a word that far too many scientists are content to hide behind, in my opinion. In this context, however, “random” really means that the assigning of states to elements of the ensemble must be ergodic, meaning that it must visit the entire state space with non-zero probability. This is the kind of process that’s needed if an infinite collection of monkeys is indeed to type the (large but finite) complete works of Shakespeare. It’s not enough that there be an infinite number of monkeys and that the works of Shakespeare be finite: the process of typing must also be ergodic.

Now it’s by no means obvious that monkeys would type ergodically. If, for example, they always hit two adjoining keys at the same time then the process would not be ergodic. Likewise it is by no means clear to me that the process of realizing the ensemble is ergodic. In fact I’m not even sure that there’s any process at all that “realizes” the string landscape. There’s a long and dangerous road from the (hypothetical) ensembles that exist even in standard quantum field theory to an actually existing “random” collection of observed things…
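The adjoining-keys point is easy to demonstrate with a little simulation of my own devising (a sketch, not anything rigorous): an "ergodic" typist presses one uniformly random key at a time, while a "stuck-pair" typist always presses a key together with its right-hand neighbour on the keyboard. The second process confines its output to a tiny corner of string space, so most texts — Shakespeare included — are simply unreachable, no matter how long it types:

```python
import random

def type_text(n_chars, keyboard, step):
    """Produce n_chars characters, where `step` returns the next
    chunk of keystrokes from the given keyboard."""
    out = []
    while len(out) < n_chars:
        out.extend(step(keyboard))
    return "".join(out[:n_chars])

def ergodic_step(keyboard):
    # Each keystroke is an independent uniform choice, so every
    # string of a given length has non-zero probability: ergodic.
    return [random.choice(keyboard)]

def stuck_pair_step(keyboard):
    # The typist always hits a key together with its right-hand
    # neighbour, so characters at positions (0,1), (2,3), ... are
    # always keyboard-adjacent: most strings can never occur.
    i = random.randrange(len(keyboard) - 1)
    return [keyboard[i], keyboard[i + 1]]
```

Running `type_text(20, "abcdefgh", stuck_pair_step)` will only ever yield strings whose even-indexed character is immediately followed by its keyboard neighbour, so a string such as "aa" at an even offset has probability zero. Infinitely many such monkeys still never type Shakespeare.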

More generally, the mere fact that a mathematical solution of an equation can be derived does not mean that that equation describes anything that actually exists in nature. In this respect I agree with Alfred North Whitehead:

There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.

It’s a quote I think some string theorists might benefit from reading!

Items 1, 2 and 3 are all needed to ensure that each particular configuration of the system is actually realized in nature. If we had an infinite number of realizations but either an infinite number of possible configurations or a non-ergodic selection mechanism, then there would be no guarantee that each possibility would actually happen. The success of this explanation consequently rests on quite stringent assumptions.

I’m a sceptic about this whole scheme for many reasons. First, I’m uncomfortable with infinity – that’s what you get for working with George Ellis, I guess. Second, and more importantly, I don’t understand string theory and am in any case unsure of the ontological status of the string landscape. Finally, although a large number of prominent cosmologists have waved their hands with commendable vigour, I have never seen anything even approaching a rigorous proof that eternal inflation does lead to a realized infinity of false vacua. If such a proof exists, I’d really like to hear about it!

R.I.P. Tom Kibble (1932-2016)

Posted in The Universe and Stuff with tags , , , , , on June 2, 2016 by telescoper

Yet again, I find myself having to use this blog to pass on some very sad news. The distinguished theoretical physicist Tom Kibble (below) passed away today, at the age of 83.

Kibble

Sir Thomas Walter Bannerman Kibble FRS (to give his full name) worked on quantum field theory, especially at the interface between high-energy particle physics and cosmology. He worked on mechanisms of symmetry breaking, phase transitions and the topological defects (monopoles, cosmic strings or domain walls) that can be formed in some theories of the early Universe; he is probably most famous for introducing the idea of cosmic strings to modern cosmology in a paper with Mark Hindmarsh. Although there isn’t yet any observational support for this idea, it has generated a great deal of very interesting research.

Tom was indeed an extremely distinguished scientist, but what most people will remember best is that he was an absolutely lovely human being. Gently spoken and impeccably courteous, he was always receptive to new ideas and gave enormous support to younger researchers. He will be very sadly missed by friends and colleagues across the physics world.

Rest in peace, Tom Kibble (1932-2016).