Archive for the The Universe and Stuff Category

One Hundred Years of Zel’dovich

Posted in The Universe and Stuff on March 12, 2014 by telescoper

Lovely weather today, but it’s also been an extremely busy day with meetings and teaching. I did, however, realize yesterday that I had forgotten to mark a very important centenary at the weekend. Had I not been such a slacker as to take last Saturday off work, I would probably have been reminded…

The great Russian physicist Yakov Borisovich Zel’dovich (left) was born on March 8th 1914, so had he lived he would have been 100 years old last Saturday. To us cosmologists Zel’dovich is best known for his work on the large-scale structure of the Universe, but he only started to work on that subject relatively late in his career, during the 1960s. He in fact began his life in research as a physical chemist, and arguably his greatest contribution to science was the development (together with Frank-Kamenetskii) of the first completely physically based theory of flame propagation. He no doubt also used insights gained from this work, together with his studies of detonation and shock waves, in the Soviet nuclear bomb programme, in which he was a central figure and which led to the chestful of medals he’s wearing in the photograph.

My own connection with Zel’dovich is primarily through his scientific descendants, principally his former student Sergei Shandarin, who has a faculty position at the University of Kansas. For example, I visited Kansas back in 1992 and worked on a project with Sergei and Adrian Melott which led to a paper published in 1993, the abstract of which makes clear the debt it owed to the work of Zel’dovich.

The accuracy of various analytic approximations for following the evolution of cosmological density fluctuations into the nonlinear regime is investigated. The Zel’dovich approximation is found to be consistently the best approximation scheme. It is extremely accurate for power spectra characterized by n = -1 or less; when the approximation is ‘enhanced’ by truncating highly nonlinear Fourier modes the approximation is excellent even for n = +1. The performance of linear theory is less spectrum-dependent, but this approximation is less accurate than the Zel’dovich one for all cases because of the failure to treat dynamics. The lognormal approximation generally provides a very poor fit to the spatial pattern.

The Zel’dovich Approximation referred to in this abstract is based on an extremely simple idea which, as we showed in the above paper, turns out to be remarkably accurate at reproducing the morphology of the “cosmic web” of large-scale structure.
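
The idea itself is easy to state: instead of solving the full equations of motion, each particle is simply moved in a straight line from its initial (Lagrangian) position q to x(q, t) = q + D(t)ψ(q), where ψ is the initial displacement field and D(t) is the linear growth factor. Here is a minimal one-dimensional sketch of that recipe in Python; this is an illustrative toy, not the code from the 1993 paper, and the power spectrum, amplitude and random seed are all invented:

```python
import numpy as np

# One-dimensional toy of the Zel'dovich approximation: each particle
# moves ballistically from its Lagrangian position q to
# x(q, t) = q + D(t) * psi(q), where psi is the initial displacement
# field and D(t) the linear growth factor.

rng = np.random.default_rng(42)
n = 512
q = np.linspace(0.0, 1.0, n, endpoint=False)   # Lagrangian coordinates in a unit box

# Build a random Gaussian displacement field with a smooth toy spectrum
k = np.fft.rfftfreq(n, d=1.0 / n)              # integer wavenumbers 0..n/2
amplitude = np.zeros_like(k)
amplitude[1:] = k[1:] ** -1.5                  # invented power-law spectrum
phases = rng.uniform(0, 2 * np.pi, size=k.size)
psi = np.fft.irfft(amplitude * np.exp(1j * phases), n=n)
psi *= 0.01 / psi.std()                        # normalise to small initial amplitude

def zeldovich_positions(D):
    """Eulerian particle positions after growth factor D (periodic box)."""
    return (q + D * psi) % 1.0

# As D grows, trajectories cross ("shell crossing") and sharp density
# caustics appear -- the 1D analogue of the walls and filaments of the web.
for D in (0.0, 1.0, 5.0):
    counts, _ = np.histogram(zeldovich_positions(D), bins=64, range=(0.0, 1.0))
    print(f"D = {D}: max density contrast = {counts.max() / counts.mean():.2f}")
```

Even this crude version shows the characteristic behaviour: as D grows, trajectories cross and sharp density caustics form, which is why the approximation captures the morphology of the cosmic web so well.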

Zel’dovich passed away in 1987. I was a graduate student at that time and never had the opportunity to meet him. If I had done so I’m sure I would have found him fascinating and intimidating in equal measure, as I admired his work enormously, as did everyone I knew in the field of cosmology. Anyway, a couple of years after his death a review paper written by him and Sergei Shandarin was published, along with the note:

The Russian version of this review was finished in the summer of 1987. By the tragic death of Ya. B.Zeldovich on December 2, 1987, about four-fifths of the paper had been translated into English. Professor Zeldovich would have been 75 years old on March 8, 1989 and was vivid and creative until his last day. The theory of the structure of the universe was one of his favorite subjects, to which he made many note-worthy contributions over the last 20 years.

As one does if one is vain, I looked down the reference list to see whether any of my papers were cited. I’d only published one paper before Zel’dovich died, so my hopes weren’t high. As it happens, though, my very first paper (Coles 1986) was there in the list. That’s still the proudest moment of my life!

Anyway, this post gives me the opportunity to advertise that there is a special meeting called The Zel’dovich Universe coming up this summer in Tallinn, Estonia. It looks to be a really interesting conference and I hope I can find the time to fit it into my schedule. I’ve never been to Estonia…

NOvA and Neutrinos

Posted in The Universe and Stuff on March 11, 2014 by telescoper

Yesterday’s Grauniad blog post by Jon Butterworth about neutrino physics reminded me that I forgot to post about an important milestone in the development of the NOvA Experiment which involves several members of the Department of Physics and Astronomy in the School of Mathematical and Physical Sciences here at the University of Sussex. Here’s the University of Sussex’s press release on the subject, which came out a couple of weeks ago.

The NOvA experiment consists of two enormous particle detectors: one near the neutrino source at the Fermi National Accelerator Laboratory (“Fermilab”) near Chicago, and the other at Ash River, Minnesota, near the Canadian border. The neutrinos are generated at Fermilab and the resulting particle beam is aimed at both detectors. The particles, sent in their billions every couple of seconds, complete the 500-mile trip in less than three milliseconds.
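
As a quick sanity check on that last figure (a back-of-envelope estimate, not an official NOvA number): the neutrinos travel at essentially the speed of light, so the 500-mile baseline implies a flight time of about 2.7 milliseconds, comfortably under three:

```python
# Light-travel time over the NOvA baseline (back-of-envelope check).
MILES_TO_KM = 1.609344
c_km_per_s = 299_792.458            # speed of light in vacuum, km/s

baseline_km = 500 * MILES_TO_KM     # ~804.7 km
travel_time_ms = baseline_km / c_km_per_s * 1000.0

print(f"{travel_time_ms:.2f} ms")   # → 2.68 ms
```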

The point is that the experiment has, for the first time, managed to detect neutrinos that have travelled through the 500 miles of rock separating the two ends of the experiment. This is obviously just a first step, but it’s equally obviously a crucial one.

Colleagues from Sussex University are strongly involved in calibrating and fine-tuning the detector, which produces light when particles pass through it. Dr Abbey Waldron and PhD student Luke Vinton have developed a calibration procedure that uses known properties of muons to calibrate precise measurements of the neutrinos, which are less well understood. The detector sees 200,000 particle interactions a second, produced by cosmic rays bombarding the atmosphere, and scientists can’t record every single one. Sussex’s Dr Matthew Tamsett has developed a trigger algorithm that searches for events that look like neutrinos among the billions of other particle interactions.

Neutrino physics is an interesting subject to someone like me, who isn’t really a particle physicist. My impression of the field is that it was fairly moribund until 1998, when the first measurement of atmospheric neutrino oscillations was announced. All of a sudden there was evidence that neutrinos can’t all be massless (as many of us had long assumed, at least as far as lecturing was concerned). Now the humble neutrino is the subject of intense experimental activity, not only in the USA and UK but all around the world, in a way that would have been difficult to predict twenty years ago.

But then, as the physicist Niels Bohr famously observed, “Prediction is very difficult. Especially about the future.”

That Fishy Saying of Einstein…

Posted in The Universe and Stuff on March 10, 2014 by telescoper

Einstein

There are two interesting things about the above Einstein meme that has been doing the rounds. The first is that there’s absolutely no evidence that I can find that Albert Einstein ever said the words attributed to him; that’s also true for the vast majority of Einstein quotes, in fact.

The other interesting thing (and I risk being labelled a pedant here) is that there are species of fish, such as the Mangrove Rivulus, that really are able to climb trees…

A Bit of Green Trivia…

Posted in Film, History, The Universe and Stuff, Uncategorized on March 8, 2014 by telescoper

Following on from yesterday’s post about George Green, I thought I’d add this little bit of Green trivia.

George Green’s sponsor and patron  was the mathematician Edward Bromhead, a Baronet and member of the landed gentry of the county of Lincolnshire. Two generations later in the Bromhead family you will find a certain Gonville Bromhead (presumably named after Gonville & Caius College, the Cambridge college that both Edward Bromhead and George Green attended). As a young man, in January 1879, Lt. Gonville Bromhead fought in the Battle of Rorke’s Drift. Almost a century later he was played by Michael Caine in the film Zulu.

Not a lot of people know that.

From Darkness to Green

Posted in History, The Universe and Stuff on March 7, 2014 by telescoper

On Wednesday this week I spent a very enjoyable few hours in London attending the Inaugural Lecture of Professor Alan Heavens at Imperial College, London. It was a very good lecture indeed, not only for its scientific content but also for the plentiful touches of droll humour in which Alan specialises. It was also followed by a nice drinks reception and buffet. The talk was entitled Cosmology in the Dark, so naturally I had to mention it on this blog!

At the end of the lecture, the vote of thanks was delivered in typically effervescent style by the ebullient Prof. Malcolm Longair, who actually supervised Alan’s undergraduate project at the Cavendish Laboratory way back in 1980, if I recall the date correctly. In his speech, Malcolm referred to the following quote from A History of the Theories of Aether and Electricity (Whittaker, 1951), which he was kind enough to send me when I asked by email:

The century which elapsed between the death of Newton and the scientific activity of Green was the darkest in the history of (Cambridge) University. It is true that (Henry) Cavendish and (Thomas) Young were educated at Cambridge; but they, after taking their undergraduate courses, removed to London. In the entire period the only natural philosopher of distinction was (John) Michell; and for some reason which at this distance of time it is difficult to understand fully, Michell’s researches seem to have attracted little or no attention among his collegiate contemporaries and successors, who silently acquiesced when his discoveries were attributed to others, and allowed his name to perish entirely from the Cambridge tradition.

I wasn’t aware of this analysis previously, but it reiterates something I have posted about before. It stresses the enormous historical importance of the British mathematician and physicist George Green, who lived from 1793 until 1841 and who left a substantial legacy for modern theoretical physicists in Green’s theorems and Green’s functions; he is also credited as being the first person to use the word “potential” in electrostatics.

Green was the son of a Nottingham miller who, amazingly, taught himself mathematics and did most of his best work, especially his remarkable Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism (1828), before starting his studies as an undergraduate at the University of Cambridge, which he did at the age of 40. Lacking independent finance, Green could not go to University until his father died, whereupon he leased out the mill he inherited to pay for his studies.

Extremely unusually for English mathematicians of his time, Green taught himself from books that were published in France. This gave him a huge advantage over his national contemporaries in that he learned the form of differential calculus that originated with Leibniz, which was far more elegant than that devised by Isaac Newton (which was called the method of fluxions). Whittaker remarks upon this:

Green undoubtedly received his own early inspiration from . . . (the great French analysts), chiefly from Poisson; but in clearness of physical insight and conciseness of exposition he far excelled his masters; and the slight volume of his collected papers has to this day a charm which is wanting in their voluminous writings.

Great scientist though he was, Newton’s influence on the development of physics in Britain was not entirely positive, as the above quote makes clear. Newton was held in such awe, especially in Cambridge, that his inferior mathematical approach was deemed to be the “right” way to do calculus and generations of scholars were forced to use it. This held back British science until the use of fluxions was phased out. Green himself was forced to learn fluxions when he went as an undergraduate to Cambridge despite having already learned the better method.

Unfortunately, Green’s great pre-Cambridge work on mathematical physics didn’t reach wide circulation in the United Kingdom until after his death. William Thomson, later Lord Kelvin, found a copy of Green’s Essay in 1845 and promoted it widely as a work of fundamental importance. This contributed to the eventual emergence of British theoretical physics from the shadow cast by Isaac Newton, which reached one of its heights just a few years later with the publication of a fully unified theory of electricity and magnetism by James Clerk Maxwell.

But as to the possible reason for the lack of recognition of John Michell, who was clearly an important figure in his own right (he was, for example, the first person to develop the concept of a black hole), you’ll have to read Malcolm Longair’s forthcoming book on the history of the Cavendish Laboratory!

Is Inflation Testable?

Posted in The Universe and Stuff on March 4, 2014 by telescoper

It seems the little poll about cosmic inflation I posted last week with humorous intent has ruffled a few feathers, but at least it gives me the excuse to wheel out an updated and edited version of an old piece I wrote on the subject.

Just over thirty years ago a young physicist came up with what seemed at first to be an absurd idea: that, for a brief moment in the very distant past, just after the Big Bang, something weird happened to gravity that made it push rather than pull. During this time the Universe went through an ultra-short episode of ultra-fast expansion. The physicist in question, Alan Guth, couldn’t prove that this “inflation” had happened, nor could he suggest a compelling physical reason why it should, but the idea seemed nevertheless to solve several major problems in cosmology.

Three decades later, Guth is a professor at MIT and inflation is now well established as an essential component of the standard model of cosmology. But should it be? After all, we still don’t know what caused it and there is little direct evidence that it actually took place. Data from probes of the cosmic microwave background seem to be consistent with the idea that inflation happened, but how confident can we be that it is really a part of the Universe’s history?

According to the Big Bang theory, the Universe was born in a dense fireball which has been expanding and cooling for about 14 billion years. The basic elements of this theory have been in place for over eighty years, but it is only in the last decade or so that a detailed model has been constructed which fits most of the available observations with reasonable precision. The problem is that the Big Bang model is seriously incomplete. The fact that we do not understand the nature of the dark matter and dark energy that appear to fill the Universe is a serious shortcoming. Even worse, we have no way at all of describing the very beginning of the Universe, which appears in the equations used by cosmologists as a “singularity”: a point of infinite density that defies any sensible theoretical calculation. We have no way to define a priori the initial conditions that determine the subsequent evolution of the Big Bang, so we have to try to infer from observations, rather than deduce by theory, the parameters that govern it.

The establishment of the new standard model (known in the trade as the “concordance” cosmology) is now allowing astrophysicists to turn back the clock in order to understand the very early stages of the Universe’s history, and hopefully to answer the ultimate question of what happened at the Big Bang itself: how did the Universe begin?

Paradoxically, it is observations on the largest scales accessible to technology that provide the best clues about the earliest stages of cosmic evolution. In effect, the Universe acts like a microscope: primordial structures smaller than atoms are blown up to astronomical scales by the expansion of the Universe. This also allows particle physicists to use cosmological observations to probe structures too small to be resolved in laboratory experiments.

Our ability to reconstruct the history of our Universe, or at least to attempt this feat, depends on the fact that light travels with a finite speed. The further away we see a light source, the further back in time its light was emitted. We can now observe light from stars in distant galaxies emitted when the Universe was less than one-sixth of its current size. In fact we can see even further back than this using microwave radiation rather than optical light. Our Universe is bathed in a faint glow of microwaves produced when it was about one-thousandth of its current size and had a temperature of thousands of degrees, rather than the chilly three degrees above absolute zero that characterizes the present-day Universe. The existence of this cosmic background radiation is one of the key pieces of evidence in favour of the Big Bang model; it was first detected in 1964 by Arno Penzias and Robert Wilson who subsequently won the Nobel Prize for their discovery.

The process by which the standard cosmological model was assembled has been a gradual one, but the latest step was taken by the European Space Agency’s Planck mission. I’ve blogged about the implications of the Planck results for cosmic inflation in more technical detail here. In a nutshell, for several years this satellite mapped the properties of the cosmic microwave background and how it varies across the sky. Small variations in the temperature of the sky result from sound waves excited in the hot plasma of the primordial fireball. These have characteristic properties that allow us to probe the early Universe in much the same way that solar astronomers use observations of the surface of the Sun to understand its inner structure, a technique known as helioseismology. The detection of the primaeval sound waves is one of the triumphs of modern cosmology, not least because their amplitude tells us precisely how loud the Big Bang really was.

The pattern of fluctuations in the cosmic radiation also allows us to probe one of the exciting predictions of Einstein’s general theory of relativity: that space should be curved by the presence of matter or energy. Measurements from Planck and its predecessor WMAP reveal that our Universe is very special: it has very little curvature, and so has a very finely balanced energy budget: the positive energy of the expansion almost exactly cancels the negative energy of gravitational attraction. The Universe is (very nearly) flat.

The observed geometry of the Universe provides a strong piece of evidence that there is a mysterious and overwhelming preponderance of dark stuff in our Universe. We can’t see this dark matter and dark energy directly, but we know they must be there because we know the overall budget is balanced. If only economics were as simple as physics.

Computer Simulation of the Cosmic Web

The concordance cosmology has been constructed not only from observations of the cosmic microwave background, but also using hints supplied by observations of distant supernovae and by the so-called “cosmic web” – the pattern seen in the large-scale distribution of galaxies which appears to match the properties calculated from computer simulations like the one shown above, courtesy of Volker Springel. The picture that has emerged to account for these disparate clues is consistent with the idea that the Universe is dominated by a blend of dark energy and dark matter, and in which the early stages of cosmic evolution involved an episode of accelerated expansion called inflation.

A quarter of a century ago, our understanding of the state of the Universe was much less precise than today’s concordance cosmology. In those days it was a domain in which theoretical speculation dominated over measurement and observation. Available technology simply wasn’t up to the task of performing large-scale galaxy surveys or detecting slight ripples in the cosmic microwave background. The lack of stringent experimental constraints made cosmology a theorists’ paradise in which many imaginative and esoteric ideas blossomed. Not all of these survived to be included in the concordance model, but inflation proved to be one of the hardiest (and indeed most beautiful) flowers in the cosmological garden.

Although some of the concepts involved had been formulated in the 1970s by Alexei Starobinsky, it was Alan Guth who in 1981 produced the paper in which the inflationary Universe picture first crystallized. At this time cosmologists didn’t know that the Universe was as flat as we now think it to be, but it was still a puzzle to understand why it was even anywhere near flat. There was no particular reason why the Universe should not be extremely curved. After all, the great theoretical breakthrough of Einstein’s general theory of relativity was the realization that space could be curved. Wasn’t it a bit strange that after all the effort needed to establish the connection between energy and curvature, our Universe decided to be flat? Of all the possible initial conditions for the Universe, isn’t this very improbable? As well as being nearly flat, our Universe is also astonishingly smooth. Although it contains galaxies that cluster into immense chains over a hundred million light years long, on scales of billions of light years it is almost featureless. This also seems surprising. Why is the celestial tablecloth so immaculately ironed?

Guth grappled with these questions and realized that they could be resolved rather elegantly if only the force of gravity could be persuaded to change its sign for a very short time just after the Big Bang. If gravity could push rather than pull, then the expansion of the Universe could speed up rather than slow down. Then the Universe could inflate by an enormous factor (10^60 or more) in next to no time and, even if it were initially curved and wrinkled, all memory of this messy starting configuration would be lost. Our present-day Universe would be very flat and very smooth no matter how it had started out.
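
To see roughly how this flattening works: the curvature term scales as |Ω − 1| ∝ 1/(aH)², and during inflation the Hubble rate H is nearly constant while the scale factor a grows by e^N over N “e-folds”, so any initial curvature is suppressed by a factor e^(−2N). A small illustrative calculation, with the starting value of |Ω − 1| invented purely for the sake of the example:

```python
import math

# Toy flatness calculation: during inflation H is roughly constant and
# a grows by e^N, so |Omega - 1| proportional to 1/(aH)^2 shrinks by e^(-2N).

def curvature_after_inflation(omega_minus_one_initial, n_efolds):
    """Suppress |Omega - 1| by a factor e^(-2N) over N e-folds of inflation."""
    return omega_minus_one_initial * math.exp(-2.0 * n_efolds)

initial = 0.5    # an invented, strongly curved starting point
for N in (10, 30, 60):
    print(f"N = {N:2d} e-folds: |Omega - 1| ~ {curvature_after_inflation(initial, N):.3e}")
```

Sixty e-folds, a commonly quoted benchmark, flattens even a strongly curved universe far beyond anything observations could detect, which is exactly the point of the mechanism.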

But how could this bizarre period of anti-gravity be realized? Guth hit upon a simple physical mechanism by which inflation might just work in practice. It relied on the fact that in the extreme conditions pertaining just after the Big Bang, matter does not behave according to the classical laws describing gases and liquids but instead must be described by quantum field theory. The simplest type of quantum field is called a scalar field; such objects are associated with particles that have no spin. Modern particle theory involves many scalar fields which are not observed in low-energy interactions, but which may well dominate affairs at the extreme energies of the primordial fireball.

Classical fluids can undergo what is called a phase transition if they are heated or cooled. Water, for example, exists in the form of steam at high temperature but condenses into a liquid as it cools. A similar thing happens with scalar fields: their configuration is expected to change as the Universe expands and cools. Phase transitions do not happen instantaneously, however, and sometimes the substance involved gets trapped in an uncomfortable state in between where it was and where it wants to be. Guth realized that if a scalar field got stuck in such a “false” state, energy – in a form known as vacuum energy – could become available to drive the Universe into accelerated expansion. We don’t know which scalar field of the many that may exist theoretically is responsible for generating inflation, but whatever it is, it is now dubbed the inflaton.

This mechanism is an echo of a much earlier idea introduced to the world of cosmology by Albert Einstein in 1917. He didn’t use the term vacuum energy; he called it a cosmological constant. He also didn’t imagine that it arose from quantum fields, but considered it to be a modification of the law of gravity. Nevertheless, Einstein’s cosmological constant idea was incorporated by Willem de Sitter into a theoretical model of an accelerating Universe. This is essentially the same mathematics that is used in modern inflationary cosmology. The connection between scalar fields and the cosmological constant may also eventually explain why our Universe seems to be accelerating now, but that would require a scalar field with a much lower effective energy scale than that required to drive inflation. Perhaps dark energy is some kind of shadow of the inflaton.

Guth wasn’t the sole creator of inflation. Andy Albrecht and Paul Steinhardt, Andrei Linde, Alexei Starobinsky, and many others produced different and, in some cases, more compelling variations on the basic theme. It was almost as if it was an idea whose time had come. Suddenly inflation was an indispensable part of cosmological theory. Literally hundreds of versions of it appeared in the leading scientific journals: old inflation, new inflation, chaotic inflation, extended inflation, and so on. Out of this activity came the realization that a phase transition as such wasn’t really necessary; all that mattered was that the field should find itself in a configuration where the vacuum energy dominated. It was also realized that other theories not involving scalar fields could behave as if they did. Modified gravity theories or theories with extra space-time dimensions provide ways of mimicking scalar fields with rather different physics. And if inflation could work with one scalar field, why not have inflation with two or more? The only problem was that there wasn’t a shred of evidence that inflation had actually happened.

This episode provides a fascinating glimpse into the historical and sociological development of cosmology in the eighties and nineties. Inflation is undoubtedly a beautiful idea. But the problems it solves were theoretical problems, not observational ones. For example, the apparent fine-tuning of the flatness of the Universe can be traced back to the absence of a theory of initial conditions for the Universe. Inflation turns an initially curved universe into a flat one, but the fact that the Universe appears to be flat doesn’t prove that inflation happened. There are initial conditions that lead to present-day flatness even without the intervention of an inflationary epoch. One might argue that these are special and therefore “improbable”, and consequently that it is more probable that inflation happened than that it didn’t. But on the other hand, without a proper theory of the initial conditions, how can we say which are more probable? Based on this kind of argument alone, we would probably never really know whether we live in an inflationary Universe or not.

But there is another thread in the story of inflation that makes it much more compelling as a scientific theory, because it makes direct contact with observations. Although it was not the original motivation for the idea, Guth and others realized very early on that if a scalar field were responsible for inflation then it should be governed by the usual rules governing quantum fields. One of the things that quantum physics tells us is that nothing evolves entirely smoothly. Heisenberg’s famous Uncertainty Principle imposes a degree of unpredictability on the behaviour of the inflaton. The most important ramification of this is that although inflation smooths away any primordial wrinkles in the fabric of space-time, in the process it lays down others of its own. The inflationary wrinkles are really ripples, caused by wave-like fluctuations in the density of matter travelling through the Universe like sound waves travelling through air. Without these fluctuations the cosmos would be smooth and featureless, containing no variations in density or pressure and therefore no sound waves. Even if it began in a fireball, such a Universe would be silent. Inflation puts the Bang in Big Bang.

The acoustic oscillations generated by inflation have a broad spectrum (they comprise oscillations with a wide range of wavelengths); they are of small amplitude (about one hundred-thousandth of the background); they are spatially random and have Gaussian statistics (like waves on the surface of the sea; this is the most disordered state); they are adiabatic (matter and radiation fluctuate together); and they are formed coherently. This last point is perhaps the most important. Because inflation happens so rapidly, all of the acoustic “modes” are excited at the same time. Hitting a metal pipe with a hammer generates a wide range of sound frequencies, but all the different modes of the pipe start their oscillations at the same time. The result is not just random noise but something moderately tuneful. The Big Bang wasn’t exactly melodic, but there is a discernible relic of the coherent nature of the sound waves in the pattern of temperature fluctuations seen in the Cosmic Microwave Background. The acoustic peaks seen in the Planck angular spectrum provide compelling evidence that whatever generated the pattern did so coherently.
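
The difference coherence makes can be illustrated with a toy calculation, in the spirit of the pipe analogy rather than a real CMB computation: superpose oscillating modes that all start with the same phase, and the ensemble-averaged power at a fixed observation time retains peaks and troughs as a function of wavenumber; give each mode a random starting phase and the structure washes out to a featureless average. All the numbers below are invented for illustration:

```python
import numpy as np

# Toy demonstration that coherent initial phases are what produce "acoustic
# peaks": each mode oscillates as cos(k * t + phase), and we average the
# squared amplitude at a fixed time t_star over many random realisations.

rng = np.random.default_rng(1)
k = np.linspace(0.5, 20.0, 400)     # wavenumbers (arbitrary units)
t_star = 1.0                        # the fixed "observation" time
n_realisations = 2000

def mean_power(random_phases):
    """Ensemble-averaged squared mode amplitude at t_star."""
    power = np.zeros_like(k)
    for _ in range(n_realisations):
        phase = rng.uniform(0, 2 * np.pi, size=k.size) if random_phases else 0.0
        power += np.cos(k * t_star + phase) ** 2
    return power / n_realisations

coherent = mean_power(random_phases=False)    # oscillates strongly with k: peaks
incoherent = mean_power(random_phases=True)   # flattens towards 1/2: no peaks

print(f"coherent power range:   {coherent.min():.2f} to {coherent.max():.2f}")
print(f"incoherent power range: {incoherent.min():.2f} to {incoherent.max():.2f}")
```

The coherent case swings between zero and one as a function of k, the analogue of the acoustic peaks and troughs; the incoherent case hovers near one half everywhere, which is the washed-out spectrum a non-coherent mechanism would produce.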

The Planck Angular Power Spectrum
There are very few alternative theories on the table that are capable of reproducing these results. But does this mean that inflation really happened? Does Planck prove it? Is the idea of inflation even testable? Will we ever know?

It is difficult to talk sensibly about scientific proof of phenomena that are so far removed from everyday experience. At what level can we prove anything in astronomy, even on the relatively small scale of the Solar System? We all accept that the Earth goes around the Sun, but do we really even know for sure that the Universe is expanding? I would say that the latter hypothesis has survived so many tests, and is consistent with so many other aspects of cosmology, that it has become, for pragmatic reasons, an indispensable part of our world view. I would hesitate, though, to say that it was proven beyond all reasonable doubt. The same goes for inflation. It is a beautiful idea that fits snugly within the standard cosmological model and binds many parts of it together. But that doesn’t necessarily make it true. Many theories are beautiful, but that is not sufficient to prove them right.

When generating theoretical ideas scientists should be fearlessly radical, but when it comes to interpreting evidence we should all be unflinchingly conservative. The Planck measurements have also provided a tantalizing glimpse into the future of cosmology, and yet more stringent tests of the standard framework that currently underpins it. Primordial fluctuations produce not only a pattern of temperature variations over the sky, but also a corresponding pattern of polarization. This is fiendishly difficult to measure, partly because it is such a weak signal (only a few percent of the temperature signal) and partly because the primordial microwaves are heavily polluted by polarized radiation from our own Galaxy. Polarization data from Planck are yet to be released; the fiendish data-analysis challenge involved is the reason for the delay. But there is a crucial target that justifies these endeavours. Inflation does not just produce acoustic waves; it also generates a different mode of fluctuation, gravitational waves, involving twisting deformations of space-time. Inflationary models connect the properties of acoustic and gravitational fluctuations, so if the latter can be detected the implications for the theory are profound. Gravitational waves produce a very particular form of polarization pattern (called the B-mode) which can’t be generated by acoustic waves, so this seems a promising way to test inflation. Unfortunately the B-mode signal is expected to be very weak, and the experience of WMAP suggests it might be swamped by foregrounds. But it is definitely worth a go, because it would add considerably to the evidence in favour of inflation as an element of physical reality.

But would even detection of primordial gravitational waves really test inflation? Not really. The problem with inflation is that it is a name given to a very general idea, and there are many (perhaps infinitely many) different ways of implementing the details, so one can devise versions of the inflationary scenario that produce a wide range of outcomes. It is therefore unlikely that there will be a magic bullet that will kill inflation dead. What is more likely is a gradual process of reducing the theoretical slack as much as possible with observational data, such as is happening in particle physics. For example, we have not yet identified the inflaton field (nor indeed any reasonable candidate for it) but we are gradually improving constraints on the allowed parameter space. Progress in this mode of science is evolutionary not revolutionary.

Many critics of inflation argue that it is not a scientific theory because it is not falsifiable. I don’t think falsifiability is a useful concept in this context; see my many posts relating to Karl Popper. Testability is a more appropriate criterion. What matters is that we have a systematic way of deciding which of a set of competing models is the best when it comes to confrontation with data. In the case of inflation we simply don’t have a compelling model to test it against. For the time being therefore, like it or not, cosmic inflation is clearly the best model we have. Maybe someday a worthy challenger will enter the arena, but this has not happened yet.

Most working cosmologists are as aware of the difficulty of testing inflation as they are of its elegance. There are also those who talk as if inflation were an absolute truth, and those who assert that it is not a proper scientific theory (because it isn’t falsifiable). I can’t agree with either of these factions. The truth is that we don’t know how the Universe really began; we just work on the best ideas available and try to reduce our level of ignorance in any way we can. We can hardly expect the secrets of the Universe to be so easily accessible to our little monkey brains.

Inflationary Opinion Poll

Posted in The Universe and Stuff on February 28, 2014 by telescoper

Compare and contrast this abstract of a paper on the arXiv by Guth et al. from last year:

Models of cosmic inflation posit an early phase of accelerated expansion of the universe, driven by the dynamics of one or more scalar fields in curved spacetime. Though detailed assumptions about fields and couplings vary across models, inflation makes specific, quantitative predictions for several observable quantities, such as the flatness parameter (Ωk=1−Ω) and the spectral tilt of primordial curvature perturbations (ns−1=dlnPR/dlnk), among others—predictions that match the latest observations from the Planck satellite to very good precision. In the light of data from Planck  as well as recent theoretical developments in the study of eternal inflation and the multiverse, we address recent criticisms of inflation by Ijjas, Steinhardt, and Loeb. We argue that their conclusions rest on several problematic assumptions, and we conclude that cosmic inflation is on a stronger footing than ever before.

and this one, just out,  by Ijjas et al.:

Classic inflation, the theory described in textbooks, is based on the idea that, beginning from typical initial conditions and assuming a simple inflaton potential with a minimum of fine-tuning, inflation can create exponentially large volumes of space that are generically homogeneous, isotropic and flat, with nearly scale-invariant spectra of density and gravitational wave fluctuations that are adiabatic, Gaussian and have generic predictable properties. In a recent paper, we showed that, in addition to having certain conceptual problems known for decades, classic inflation is for the first time also disfavored by data, specifically the most recent data from WMAP, ACT and Planck2013. Guth, Kaiser and Nomura and Linde have each recently published critiques of our paper, but, as made clear here, we all agree about one thing: the problematic state of classic inflation. Instead, they describe an alternative inflationary paradigm that revises the assumptions and goals of inflation, and perhaps of science generally.

I’m not sure how much of a “schism” (to use Ijjas et al.’s word) there actually is, but it seems like an appropriate subject for a totally unscientific Friday lunchtime opinion poll:

The faces of highly followed astronomers on Twitter

Posted in The Universe and Stuff on February 28, 2014 by telescoper

On Twitter? Looking for an astronomer or astrophysicist to follow? Here’s a Rogues Gallery…

Sussex University – the Place for Undergraduate Physics Research!

Posted in Education, The Universe and Stuff on February 27, 2014 by telescoper

One of the courses we offer in the School of Physics & Astronomy here at the University of Sussex is the integrated Masters in Physics with a Research Placement. Aimed at high-flying students with ambitions to become research physicists, this programme includes a paid research placement as a Junior Research Associate each summer vacation for the duration of the course; that means between Years 1 & 2, Years 2 & 3 and Years 3 & 4. This course has proved extremely attractive to a large number of very talented students and it exemplifies the way the Department of Physics & Astronomy integrates world-class research with its teaching in a uniquely successful and imaginative way.

Here’s a little video made by the University that features Sophie Williamson, who is currently in her second year (and who is also in the class to which I’m currently teaching a module on Theoretical Physics):

This week we had some very good news about another of our undergraduate researchers, Talitha Bromwich, who is now in the final year of her MPhys degree, and is pictured below with her supervisor Dr Simon Peeters:

Talitha Bromwich with her JRA supervisor Dr Simon Peeters at 'Posters in Parliament' event 25 Feb 14

Talitha spent last summer working on the DEAP3600 dark-matter detector after being selected for the University’s Junior Research Associate scheme. Her project won first prize at the University’s JRA poster exhibition last October, and she was then chosen to present her findings – alongside undergraduate researchers from 22 other universities – in Westminster yesterday as part of the annual Posters in Parliament exhibition, organized under the auspices of the British Conference of Undergraduate Research (BCUR).

A judging panel – consisting of Ben Wallace, Conservative MP for Wyre and Preston North; Sean Coughlan, Education Correspondent for the BBC; Professor Julio Rivera, President of the US Council of Undergraduate Research; and Katherine Harrington of the Higher Education Academy – decided to award Talitha’s project First Prize in this extremely prestigious competition.

Congratulations to Talitha for her prizewinning project! I’m sure her outstanding success will inspire future generations of Sussex undergraduates too!

Galaxies, Glow-worms and Chicken Eyes

Posted in Bad Statistics, The Universe and Stuff on February 26, 2014 by telescoper

I just came across a news item based on a research article in Physical Review E by Jiao et al. with the abstract:

Optimal spatial sampling of light rigorously requires that identical photoreceptors be arranged in perfectly regular arrays in two dimensions. Examples of such perfect arrays in nature include the compound eyes of insects and the nearly crystalline photoreceptor patterns of some fish and reptiles. Birds are highly visual animals with five different cone photoreceptor subtypes, yet their photoreceptor patterns are not perfectly regular. By analyzing the chicken cone photoreceptor system consisting of five different cell types using a variety of sensitive microstructural descriptors, we find that the disordered photoreceptor patterns are “hyperuniform” (exhibiting vanishing infinite-wavelength density fluctuations), a property that had heretofore been identified in a unique subset of physical systems, but had never been observed in any living organism. Remarkably, the patterns of both the total population and the individual cell types are simultaneously hyperuniform. We term such patterns “multihyperuniform” because multiple distinct subsets of the overall point pattern are themselves hyperuniform. We have devised a unique multiscale cell packing model in two dimensions that suggests that photoreceptor types interact with both short- and long-ranged repulsive forces and that the resultant competition between the types gives rise to the aforementioned singular spatial features characterizing the system, including multihyperuniformity. These findings suggest that a disordered hyperuniform pattern may represent the most uniform sampling arrangement attainable in the avian system, given intrinsic packing constraints within the photoreceptor epithelium. In addition, they show how fundamental physical constraints can change the course of a biological optimization process. 
Our results suggest that multihyperuniform disordered structures have implications for the design of materials with novel physical properties and therefore may represent a fruitful area for future research.

The point made in the paper is that the photoreceptors found in the eyes of chickens possess a property called disordered hyperuniformity, which means that they appear disordered on small scales but exhibit order over large distances. Here’s an illustration:

chicken_eyes

It’s an interesting paper, but I’d like to quibble about something it says in the accompanying news story. The caption with the above diagram states

Left: visual cell distribution in chickens; right: a computer-simulation model showing pretty much the exact same thing. The colored dots represent the centers of the chicken’s eye cells.

Well, as someone who has spent much of his research career trying to discern and quantify patterns in collections of points – in my case they tend to be galaxies rather than photoreceptors – I find it difficult to defend the use of the phrase “pretty much the exact same thing”. It’s notoriously difficult to look at realizations of stochastic point processes and decide whether they are statistically similar or not. For that you generally need quite sophisticated mathematical analysis. In fact, to my eye, the two images above don’t look at all like “pretty much the exact same thing”. I’m not at all sure that the model works as well as claimed, as the statistical analysis presented in the paper is relatively simple: I’d need to see some more quantitative measures of pattern morphology and clustering, especially higher-order correlation functions, before I’m convinced.
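To give a flavour of what I mean by a quantitative measure, here is a toy sketch in Python (the function names are my own illustrative choices, not anything from the paper or my own research code) of one of the very simplest such statistics, the mean nearest-neighbour distance. For a homogeneous Poisson process of intensity λ the expected value is 1/(2√λ), so even this crude measure gives a benchmark to compare a pattern against:

```python
import numpy as np

def mean_nn_distance(points):
    """Mean nearest-neighbour distance for an (N, 2) array of points."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)  # exclude each point's zero distance to itself
    return d.min(axis=1).mean()

rng = np.random.default_rng(42)
n = 500
poisson = rng.uniform(0.0, 1.0, size=(n, 2))  # complete spatial randomness

# For a Poisson process of intensity n in the unit square the expected
# nearest-neighbour distance is 0.5 / sqrt(n); edge effects bias the
# sample value slightly upward.
expected = 0.5 / np.sqrt(n)
print(mean_nn_distance(poisson), expected)
```

A clustered pattern would give a smaller value than the Poisson expectation, and an anticorrelated one a larger value; distinguishing subtler differences needs correlation functions and higher-order statistics, which is exactly the point.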

Anyway, all this reminded me of a very old post of mine about the difficulty of discerning patterns in distributions of points. Take the two (not very well scanned) images here as examples:

points

You will have to take my word for it that one of these is a realization of a two-dimensional Poisson point process (which is, in a well-defined sense, completely “random”) and the other contains spatial correlations between the points. One therefore has a real pattern to it, and one is a realization of a completely unstructured random process.

I sometimes show this example in popular talks and get the audience to vote on which one is the random one. The vast majority usually think that the one on the right is the one that is random and the left one is the one with structure to it. It is not hard to see why. The right-hand pattern is very smooth (what one would naively expect for a constant probability of finding a point at any position in the two-dimensional space), whereas the left one seems to offer a profusion of linear, filamentary features and densely concentrated clusters.

In fact, it’s the left picture that was generated by a Poisson process using a Monte Carlo random number generator. All the structure that is visually apparent is imposed by our own sensory apparatus, which has evolved to be so good at discerning patterns that it finds them when they’re not even there!

The right process is also generated by a Monte Carlo technique, but the algorithm is more complicated. In this case the presence of a point at some location suppresses the probability of having other points in the vicinity. Each event has a zone of avoidance around it; the points are therefore anticorrelated. The result of this is that the pattern is much smoother than a truly random process should be. In fact, this simulation has nothing to do with galaxy clustering really. The algorithm used to generate it was meant to mimic the behaviour of glow-worms (a kind of beetle) which tend to eat each other if they get too close. That’s why they spread themselves out in space more uniformly than in the random pattern. In fact, the tendency displayed in this image of the points to spread themselves out more smoothly than a random distribution is in some ways reminiscent of the chicken eye problem.
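Both kinds of pattern are easy to generate yourself. Here is a minimal Python sketch (a toy version, not the original code behind the scanned images): the “random” one is just uniform points in the unit square, while the anticorrelated one uses simple sequential inhibition, rejecting any candidate point that falls inside the zone of avoidance of an existing one:

```python
import numpy as np

def poisson_points(n, rng):
    """Completely 'random' pattern: n points uniform in the unit square."""
    return rng.uniform(0.0, 1.0, size=(n, 2))

def hardcore_points(n, r, rng, max_tries=100_000):
    """Simple sequential inhibition: each candidate point is rejected if it
    lies within radius r of an already-accepted point (a zone of avoidance),
    producing an anticorrelated, smoother-than-random pattern."""
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        p = rng.uniform(0.0, 1.0, size=2)
        if all(np.hypot(*(p - q)) > r for q in pts):
            pts.append(p)
        tries += 1
    return np.array(pts)

rng = np.random.default_rng(1)
random_pattern = poisson_points(300, rng)   # filaments and clusters appear by chance
smooth_pattern = hardcore_points(300, 0.03, rng)  # visibly more uniform
```

Scatter-plot the two arrays side by side and the effect described above is immediate: the Poisson pattern looks full of structure, the inhibited one looks “more random” to most eyes, even though the opposite is true.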

The moral of all this is that people are actually pretty hopeless at understanding what “really” random processes look like, probably because the word random is used so often in very imprecise ways and they don’t know what it means in a specific context like this. The point about random processes, even simpler ones like repeated tossing of a coin, is that coincidences happen much more frequently than one might suppose. By the same token, people are also pretty hopeless at figuring out whether two distributions of points resemble each other in some kind of statistical sense, because that can only be made precise if one defines some specific quantitative measure of clustering pattern, which is not easy to do.
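The coin-tossing point is easy to demonstrate numerically. In 100 fair tosses the longest run of identical outcomes is typically around seven, which is far longer than most people guess; a short simulation (my own illustrative code) makes this concrete:

```python
import random

def longest_run(flips):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for a, b in zip(flips, flips[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

random.seed(0)
# 2000 experiments of 100 fair coin tosses each
runs = [longest_run([random.randint(0, 1) for _ in range(100)])
        for _ in range(2000)]
print(sum(runs) / len(runs))  # typically close to 7
```

Ask people to write down a “random-looking” sequence of 100 heads and tails and they will almost never include a run that long, which is the same pattern-imposing instinct at work.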