Archive for the The Universe and Stuff Category

Nervous

Posted in Finance, Science Politics, The Universe and Stuff with tags , , on November 22, 2015 by telescoper

The outcome of the 2015 Comprehensive Spending Review is to be announced shortly (on Wednesday 25th November), a fact which suggested this piece of music. It’s a solo piano piece by the late great Mal Waldron. Among many other things, Mal Waldron was Billie Holiday’s regular accompanist from 1957 until her death in 1959, and it was during that time that he was booked to appear on a famous all-star TV jazz broadcast called The Sound of Jazz, from which this solo performance is taken. It’s an original composition by the pianist, and it’s called Nervous.

p.s. I did a blog post some time ago about Billie Holiday’s heartbreaking last performance with Lester Young, which also appeared on The Sound of Jazz. You can find it here.

Fourier-transforming the Universe

Posted in The Universe and Stuff with tags , , , on November 20, 2015 by telescoper

Following the little post I did on Tuesday in reaction to a nice paper on the arXiv by Pontzen et al., my attention was drawn today to another paper related to the comment I made about using Fourier phases as a diagnostic of pattern morphology. The abstract of this one, by Way et al., is as follows:

We compute the complex 3D Fourier transform of the spatial galaxy distribution in a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields results quite similar to those from the Fast Fourier Transform (FFT) of finely binned galaxy positions. In both cases deconvolution of the sampling window function yields estimates of the true 3D transform. The Fourier amplitudes resulting from this simple procedure yield power spectrum estimates consistent with those from other much more complicated approaches. We demonstrate how the corresponding Fourier phase spectrum lays out a simple and complete characterization of non-Gaussianity that is more easily interpretable than the tangled, incomplete multi-point methods conventionally used. Measurements based on the complex Fourier transform indicate departures from exact homogeneity and isotropy at the level of 1% or less. Our model-independent analysis avoids statistical interpretations, which have no meaning without detailed assumptions about a hypothetical process generating the initial cosmic density fluctuations.

It’s obviously an excellent piece of work because it cites a lot of my papers!

But seriously I think it’s very exciting that we now have data sets of sufficient size and quality to allow us to go beyond the relatively crude statistical description provided by the power spectrum.

 

Inverted Cosmology

Posted in The Universe and Stuff on November 17, 2015 by telescoper

Just time for a quick post about a neat little paper by Pontzen et al. that has appeared on the arXiv. Here is the abstract:

 

The abstract is a model of clarity so there’s no need to add further explanation here. Having A and B simulations in which initial overdensities and underdensities are swapped but everything else is preserved allows a number of interesting things to be studied.

When I read the paper it struck me that it would be fun to use “paired” simulations like this to study statistical properties of the evolved density field that go beyond the usual power spectra discussed in the paper; you can find a nice review of power spectra and their uses here.

Here’s what I mean. Take a look at these two N-body computer simulations of large-scale structure:

The one on the left is a proper simulation of the “cosmic web” which is at least qualitatively realistic, in that it contains filaments, clusters and voids pretty much like what is observed in galaxy surveys.

To make the picture on the right I first took the Fourier transform of the original simulation shown on the left. This approach follows the best advice I ever got from my thesis supervisor: “if you can’t think of anything else to do, try Fourier-transforming everything”. Anyway, each Fourier mode is complex and can therefore be characterized by an amplitude and a phase (the modulus and argument of the complex quantity). What I did next was to randomly reshuffle all the phases while leaving the amplitudes alone. I then performed the inverse Fourier transform to construct the image shown on the right.

What this procedure does is to produce a new image which has exactly the same power spectrum as the first. You might be surprised by how little the pattern on the right resembles that on the left, given that they share this property; the distribution on the right is much fuzzier. In fact, the sharply delineated features are produced by mode-mode correlations and are therefore not well described by the power spectrum, which involves only the amplitude of each separate mode. These features are manifestations of non-linear dynamics and are not described by linear perturbation theory.
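The reshuffling procedure described above can be sketched in a few lines of NumPy. Here a random 2D field stands in for the actual simulation, and the grid size is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a 2D slice of the simulated density field.
field = rng.standard_normal((256, 256))

# Each Fourier mode is complex: an amplitude (modulus) and a phase (argument).
modes = np.fft.fft2(field)
amplitudes = np.abs(modes)
phases = np.angle(modes)

# Randomly reshuffle all the phases, leaving the amplitudes alone.
shuffled_phases = rng.permutation(phases.ravel()).reshape(phases.shape)
new_modes = amplitudes * np.exp(1j * shuffled_phases)

# Inverse transform to build the new image. Arbitrary reshuffling breaks the
# Hermitian symmetry f(-k) = f*(k) that makes the inverse transform of a real
# field exactly real, so we keep the real part; a more careful version would
# shuffle conjugate mode pairs together.
new_field = np.real(np.fft.ifft2(new_modes))
```

By construction the reshuffled modes have exactly the same amplitudes, and hence the same power spectrum, as the original field.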

If you’re confused by this, consider the Fourier transforms of (a) white noise and (b) a Dirac delta-function. Both produce flat power-spectra, but they look very different in real space because in (b) all the Fourier modes are correlated in such a way that they are in phase at the one location where the pattern is not zero; everywhere else they interfere destructively. In (a) the phases are distributed randomly.
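This is easy to check numerically using discrete analogues (a spike at a single grid point for the delta-function):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

# (a) White noise: independent random values at every point.
white = rng.standard_normal(n)

# (b) A discrete delta-function: a single spike at the origin.
delta = np.zeros(n)
delta[0] = 1.0

# Power spectra: squared moduli of the Fourier coefficients.
p_white = np.abs(np.fft.fft(white)) ** 2
p_delta = np.abs(np.fft.fft(delta)) ** 2

# The delta's spectrum is exactly flat and its phases are all zero: every
# mode is aligned so that they add constructively at the origin and cancel
# everywhere else. The noise spectrum is flat only on average, with random
# phases mode by mode.
delta_phases = np.angle(np.fft.fft(delta))
```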

The moral of this is that there is much more to the pattern of galaxy clustering than meets the power spectrum, i.e. all the information contained in the distribution of phases. However, studying the evolution of Fourier phases in the context of non-linear gravitational evolution is quite tricky for a number of technical reasons. Note that the “paired” simulations of Pontzen et al. are generated in such a way that the A and B simulations also have the same power spectrum, but unlike those shown above, have the same type of morphology, which might allow one to finesse some of these difficulties and separate out the effect of non-linear dynamics from the choice of initial power spectrum in a potentially interesting way.

Just a thought.

Life as a Condition of Cosmology

Posted in The Universe and Stuff with tags , , , , , , , on November 7, 2015 by telescoper

Trigger Warnings: Bayesian Probability and the Anthropic Principle!

Once upon a time I was involved in setting up a cosmology conference in Valencia (Spain). The principal advantage of being among the organizers of such a meeting is that you get to invite yourself to give a talk and to choose the topic. On this particular occasion, I deliberately abused my privilege and put myself on the programme to talk about the “Anthropic Principle”. I doubt if there is any subject more likely to polarize a scientific audience than this. About half the participants present in the meeting stayed for my talk. The other half ran screaming from the room. Hence the trigger warnings on this post. Anyway, I noticed a tweet this morning from Jon Butterworth advertising a new blog post of his on the very same subject so I thought I’d while away a rainy November afternoon with a contribution of my own.

In case you weren’t already aware, the Anthropic Principle is the name given to a class of ideas arising from the suggestion that there is some connection between the material properties of the Universe as a whole and the presence of human life within it. The name was coined by Brandon Carter in 1974 as a corrective to the “Copernican Principle” that man does not occupy a special place in the Universe. A naïve application of this latter principle to cosmology might lead us to think that we could have evolved in any of the myriad possible Universes described by the system of Friedmann equations. The Anthropic Principle denies this, because life could not have evolved in all possible versions of the Big Bang model. There are however many different versions of this basic idea that have different logical structures and indeed different degrees of credibility. It is not really surprising to me that there is such a controversy about this particular issue, given that so few physicists and astronomers take time to study the logical structure of the subject, and this is the only way to assess the meaning and explanatory value of propositions like the Anthropic Principle. My former PhD supervisor, John Barrow (who is quoted in Jon Butterworth’s post) wrote the definitive text on this topic together with Frank Tipler, to which I refer you for more background. What I want to do here is to unpick this idea from a very specific perspective and show how it can be understood quite straightforwardly in terms of Bayesian reasoning. I’ll begin by outlining this form of inferential logic.

I’ll start with Bayes’ theorem which for three logical propositions (such as statements about the values of parameters in a theory) A, B and C can be written in the form

P(B|AC) = K^{-1}P(B|C)P(A|BC) = K^{-1} P(AB|C)

where

K=P(A|C).

This is (or should be!)  uncontroversial as it is simply a result of the sum and product rules for combining probabilities. Notice, however, that I’ve not restricted it to two propositions A and B as is often done, but carried throughout an extra one (C). This is to emphasize the fact that, to a Bayesian, all probabilities are conditional on something; usually, in the context of data analysis this is a background theory that furnishes the framework within which measurements are interpreted. If you say this makes everything model-dependent, then I’d agree. But every interpretation of data in terms of parameters of a model is dependent on the model. It has to be. If you think it can be otherwise then I think you’re misguided.

In the equation, P(B|C) is the probability of B being true, given that C is true. The information C need not be definitely known, but perhaps assumed for the sake of argument. The left-hand side of Bayes’ theorem denotes the probability of B given both A and C, and so on. The presence of C has not changed anything, but is just there as a reminder that it all depends on what is being assumed in the background. The equation states a theorem that can be proved to be mathematically correct so it is – or should be – uncontroversial.

To a Bayesian, the entities A, B and C are logical propositions which can only be either true or false. The entities themselves are not blurred out, but we may have insufficient information to decide which of the two possibilities is correct. In this interpretation, P(A|C) represents the degree of belief that it is consistent to hold in the truth of A given the information C. Probability is therefore a generalization of the “normal” deductive logic expressed by Boolean algebra: the value “0” is associated with a proposition which is false and “1” denotes one that is true. Probability theory extends  this logic to the intermediate case where there is insufficient information to be certain about the status of the proposition.

A common objection to Bayesian probability is that it is somehow arbitrary or ill-defined. “Subjective” is the word that is often bandied about. This is only fair to the extent that different individuals may have access to different information and therefore assign different probabilities. Given different information C and C′ the probabilities P(A|C) and P(A|C′) will be different. On the other hand, the same precise rules for assigning and manipulating probabilities apply as before. Identical results should therefore be obtained whether these are applied by any person, or even a robot, so that part isn’t subjective at all.

In fact I’d go further. I think one of the great strengths of the Bayesian interpretation is precisely that it does depend on what information is assumed. This means that such information has to be stated explicitly. The essential assumptions behind a result can be – and, regrettably, often are – hidden in frequentist analyses. Being a Bayesian forces you to put all your cards on the table.

To a Bayesian, probabilities are always conditional on other assumed truths. There is no such thing as an absolute probability, hence my alteration of the form of Bayes’s theorem to represent this. A probability such as P(A) has no meaning to a Bayesian: there is always conditioning information. For example, if I blithely assign a probability of 1/6 to each face of a die, that assignment is actually conditional on me having no information to discriminate between the appearance of the faces, and no knowledge of the rolling trajectory that would allow me to make a prediction of its eventual resting position.

In the Bayesian framework, probability theory becomes not a branch of experimental science but a branch of logic. Like any branch of mathematics it cannot be tested by experiment but only by the requirement that it be internally self-consistent. This brings me to what I think is one of the most important results of twentieth century mathematics, but which is unfortunately almost unknown in the scientific community. In 1946, Richard Cox derived the unique generalization of Boolean algebra under the assumption that such a logic must involve associating a single number with any logical proposition. The result he got is beautiful and anyone with any interest in science should make a point of reading his elegant argument. It turns out that the only way to construct a consistent logic of uncertainty incorporating this principle is by using the standard laws of probability. There is no other way to reason consistently in the face of uncertainty than probability theory. Accordingly, probability theory always applies when there is insufficient knowledge for deductive certainty. Probability is inductive logic.

This is not just a nice mathematical property. This kind of probability lies at the foundations of a consistent methodological framework that not only encapsulates many common-sense notions about how science works, but also puts at least some aspects of scientific reasoning on a rigorous quantitative footing. This is an important weapon that should be used more often in the battle against the creeping irrationalism one finds in society at large.

To see how the Bayesian approach provides a methodology for science, let us consider a simple example. Suppose we have a hypothesis H (some theoretical idea that we think might explain some experiment or observation). We also have access to some data D, and we also adopt some prior information I (which might be the results of other experiments and observations, or other working assumptions). What we want to know is how strongly the data D supports the hypothesis H given our background assumptions I. To keep it easy, we assume that the choice is between whether H is true or H is false. In the latter case, “not-H” or H′ (for short) is true. If our experiment is at all useful we can construct P(D|HI), the probability that the experiment would produce the data set D if both our hypothesis and the conditional information are true.

The probability P(D|HI) is called the likelihood; to construct it we need to have some knowledge of the statistical errors produced by our measurement. Using Bayes’ theorem we can “invert” this likelihood to give P(H|DI), the probability that our hypothesis is true given the data and our assumptions. The result looks just like we had in the first two equations:

P(H|DI) = K^{-1}P(H|I)P(D|HI) .

Now we can expand the “normalising constant” K because we know that either H or H′ must be true. Thus

K=P(D|I)=P(H|I)P(D|HI)+P(H^{\prime}|I) P(D|H^{\prime}I)

The P(H|DI) on the left-hand side of the first expression is called the posterior probability; the right-hand side involves P(H|I), which is called the prior probability and the likelihood P(D|HI). The principal controversy surrounding Bayesian inductive reasoning involves the prior and how to define it, which is something I’ll comment on in a future post.
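As a toy illustration of these formulae (all the numbers here are invented purely for the example), the posterior for a binary choice between H and H′ is a one-liner:

```python
def posterior(prior_h, like_h, like_not_h):
    """P(H|DI) for a binary choice, via Bayes' theorem.

    prior_h    -- the prior P(H|I); P(H'|I) is then 1 - prior_h
    like_h     -- the likelihood P(D|HI)
    like_not_h -- the likelihood P(D|H'I)
    """
    k = prior_h * like_h + (1.0 - prior_h) * like_not_h  # K = P(D|I)
    return prior_h * like_h / k

# A modest prior, but data that strongly favour H:
p = posterior(prior_h=0.3, like_h=0.8, like_not_h=0.1)  # = 0.24/0.31, about 0.774
```

A likelihood eight times larger under H than under H′ lifts a 30% prior to a posterior of roughly 77%.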

The Bayesian recipe for testing a hypothesis assigns a large posterior probability to a hypothesis for which the product of the prior probability and the likelihood is large. It can be generalized to the case where we want to pick the best of a set of competing hypotheses, say H1 … Hn. Note that this need not be the set of all possible hypotheses, just those that we have thought about. We can only choose from what is available. The hypotheses may be relatively simple, such as that some particular parameter takes the value x, or they may be composite, involving many parameters and/or assumptions. For instance, the Big Bang model of our universe is a very complicated hypothesis, or in fact a combination of hypotheses joined together, involving at least a dozen parameters which can’t be predicted a priori but which have to be estimated from observations.

The required result for multiple hypotheses is pretty straightforward: the sum of the two alternatives involved in K above simply becomes a sum over all possible hypotheses, so that

P(H_i|DI) = K^{-1}P(H_i|I)P(D|H_iI),

and

K=P(D|I)=\sum P(H_j|I)P(D|H_jI)

If the hypothesis concerns the value of a parameter – in cosmology this might be, e.g., the mean density of the Universe expressed by the density parameter Ω0 – then the allowed space of possibilities is continuous. The sum in the denominator should then be replaced by an integral, but conceptually nothing changes. Our “best” hypothesis is the one that has the greatest posterior probability.

From a frequentist stance the procedure is often instead just to maximize the likelihood. According to this approach the best theory is the one that makes the data most probable. This coincides with the most probable theory only if the prior probability is constant: in general the probability of a model given the data is not the same as the probability of the data given the model. I’m amazed how many practising scientists make this error on a regular basis.
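A minimal sketch of how the two procedures can disagree; the coin hypotheses, likelihoods and priors below are entirely made up for illustration:

```python
# Two hypotheses about a coin: it is fair, or heavily biased towards heads.
# The data D: three heads in three tosses.
likelihood = {"fair": 0.5 ** 3, "biased": 0.9 ** 3}  # P(D|H I)
prior = {"fair": 0.99, "biased": 0.01}               # P(H|I)

# Maximum likelihood picks the hypothesis that makes the data most probable...
ml_best = max(likelihood, key=likelihood.get)

# ...but the posterior, which weighs in the prior, can prefer the other one.
k = sum(prior[h] * likelihood[h] for h in prior)     # K = P(D|I)
post = {h: prior[h] * likelihood[h] / k for h in prior}
map_best = max(post, key=post.get)
```

Here ml_best is "biased" while map_best is "fair": the data are most probable under the biased-coin hypothesis, but the fair coin remains the more probable hypothesis once the prior is taken into account.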

The following figure might serve to illustrate the difference between the frequentist and Bayesian approaches. In the former case, everything is done in “data space” using likelihoods, and in the other we work throughout with probabilities of hypotheses, i.e. we think in hypothesis space. I find it interesting to note that most theorists that I know who work in cosmology are Bayesians and most observers are frequentists!


As I mentioned above, it is the presence of the prior probability in the general formula that is the most controversial aspect of the Bayesian approach. The attitude of frequentists is often that this prior information is completely arbitrary or at least “model-dependent”. Being empirically-minded people, by and large, they prefer to think that measurements can be made and interpreted without reference to theory at all.

Assuming we can assign the prior probabilities in an appropriate way, what emerges from the Bayesian framework is a consistent methodology for scientific progress. The scheme starts with the hardest part – theory creation. This requires human intervention, since we have no automatic procedure for dreaming up hypotheses from thin air. Once we have a set of hypotheses, we need data against which theories can be compared using their relative probabilities. The experimental testing of a theory can happen in many stages: the posterior probability obtained after one experiment can be fed in, as a prior, into the next. The order of experiments does not matter. This all happens in an endless loop, as models are tested and refined by confrontation with experimental discoveries, and are forced to compete with new theoretical ideas. Often one particular theory emerges as most probable for a while, such as in particle physics where a “standard model” has been in existence for many years. But this does not make it absolutely right; it is just the best bet amongst the alternatives. Likewise, the Big Bang model does not represent the absolute truth, but is just the best available model in the face of the manifold relevant observations we now have concerning the Universe’s origin and evolution. The crucial point about this methodology is that it is inherently inductive: all the reasoning is carried out in “hypothesis space” rather than “observation space”. The primary form of logic involved is not deduction but induction. Science is all about inverse reasoning.

Now, back to the anthropic principle. The point is that we can observe that life exists in our Universe and this observation must be incorporated as conditioning information whenever we try to make inferences about cosmological models if we are to reason consistently. In other words, the existence of life is a datum that must be incorporated in the conditioning information I mentioned above.

Suppose we have a model of the Universe M that contains various parameters which can be fixed by some form of observation. Let U be the proposition that these parameters take specific values U1, U2, and so on. Anthropic arguments revolve around the existence of life, so let L be the proposition that intelligent life evolves in the Universe. Note that the word “anthropic” implies specifically human life, but many versions of the argument do not necessarily accommodate anything more complicated than a virus.

Using Bayes’ theorem we can write

P(U|L,M)=K^{-1} P(U|M)P(L|U,M)

The dependence of the posterior probability P(U|L,M) on the likelihood P(L|U,M) demonstrates that the values of U for which P(L|U,M) is larger correspond to larger values of P(U|L,M); K is just a normalizing constant for the purpose of this argument. Since life is observed in our Universe the model-parameters which make life more probable must be preferred to those that make it less so. To go any further we need to say something about the likelihood and the prior. Here the complexity and scope of the model makes it virtually impossible to apply in detail the symmetry principles usually exploited to define priors for physical models. On the other hand, it seems reasonable to assume that the prior is broad rather than sharply peaked; if our prior knowledge of which universes are possible were so definite then we wouldn’t really be interested in knowing what observations could tell us. If now the likelihood is sharply peaked in U then this will be projected directly into the posterior distribution.

We have to assign the likelihood using our knowledge of how galaxies, stars and planets form, how planets are distributed in orbits around stars, what conditions are needed for life to evolve, and so on. There are certainly many gaps in this knowledge. Nevertheless if any one of the steps in this chain of knowledge requires very finely-tuned parameter choices then we can marginalize over the remaining steps and still end up with a sharp peak in the remaining likelihood and so also in the posterior probability. For example, there are plausible reasons for thinking that intelligent life has to be carbon-based, and therefore evolve on a planet. It is reasonable to infer, therefore, that P(U|L,M) should prefer some values of U. This means that there is a correlation between the propositions U and L in the sense that knowledge of one should, through Bayesian reasoning, enable us to make inferences about the other.
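The broad-prior/sharp-likelihood argument can be caricatured numerically; the parameter grid, peak position and width below are entirely illustrative:

```python
import numpy as np

# Illustrative grid of values for a single parameter in U.
u = np.linspace(0.0, 2.0, 2001)
du = u[1] - u[0]

# Broad (flat) prior P(U|M): little a priori idea which universes are possible.
prior_u = np.ones_like(u)
prior_u /= prior_u.sum() * du

# Sharply peaked likelihood P(L|U,M): life only arises near u = 1.
like_u = np.exp(-0.5 * ((u - 1.0) / 0.05) ** 2)

# The posterior P(U|L,M) inherits the sharp peak directly from the likelihood.
post_u = prior_u * like_u
post_u /= post_u.sum() * du
```

Because the prior is flat, the posterior is simply the normalized likelihood: the peak is projected straight through, as described above.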

It is very difficult to make this kind of argument rigorously quantitative, but I can illustrate how the argument works with a simplified example. Let us suppose that the relevant parameters contained in the set U include such quantities as Newton’s gravitational constant G, the charge on the electron e, and the mass of the proton m. These are usually termed fundamental constants. The argument above indicates that there might be a connection between the existence of life and the value that these constants jointly take. Moreover, there is no reason why this kind of argument should not be used to find the values of fundamental constants in advance of their measurement. The ordering of experiment and theory is merely an historical accident; the process is cyclical. An illustration of this type of logic is furnished by the case of a plant whose seeds germinate only after prolonged rain. A newly-germinated (and intelligent) specimen could either observe dampness in the soil directly, or infer it using its own knowledge coupled with the observation of its own germination. This type of argument, used properly, can be predictive and explanatory.

This argument is just one example of a number of its type, and it has clear (but limited) explanatory power. Indeed it represents a fruitful application of Bayesian reasoning. The question is how surprised we should be that the constants of nature are observed to have their particular values. That clearly requires a probability-based answer. The smaller the probability of a specific joint set of values (given our prior knowledge), the more surprised we should be to find them. But this surprise should be bounded in some way: the values have to lie somewhere in the space of possibilities. Our argument has not explained why life exists or even why the parameters take their values, but it has elucidated the connection between two propositions. In doing so it has reduced the number of unexplained phenomena from two to one. But it still takes our existence as a starting point rather than trying to explain it from first principles.

Arguments of this type were called the Weak Anthropic Principle by Brandon Carter, and I do not believe there is any reason for them to be at all controversial. They are simply Bayesian arguments in which the existence of life enters Bayes’ theorem as an observation about the Universe, in the same way as all other relevant data and whatever other conditioning information we have. If more scientists knew about the inductive nature of their subject, then this type of logic would not have acquired the suspicious status that it currently has.

MADCOWS and Extreme Galaxy Clusters

Posted in The Universe and Stuff, Uncategorized with tags , , , on November 4, 2015 by telescoper

I thought I’d do a quick post just to have an excuse to post this very pretty picture I found in a press release from  JPL:

extreme cluster

This is a distant galaxy cluster found in the “Massive And Distant Clusters Of WISE Survey“, which is known by its acronym “MADCOWS”. Ho Ho Ho. If the previous link is inaccessible, because you don’t have a subscription, then don’t worry: the paper concerned is available for free on the arXiv. If the previous link isn’t inaccessible, because you do have a subscription, then do worry because you’re wasting your money…

Anyway the abstract of the paper, by Gonzalez et al., reads:

We present confirmation of the cluster MOO J1142+1527, a massive galaxy cluster discovered as part of the Massive and Distant Clusters of WISE Survey. The cluster is confirmed to lie at z = 1.19, and using the Combined Array for Research in Millimeter-wave Astronomy we robustly detect the Sunyaev–Zel’dovich (SZ) decrement at 13.2σ. The SZ data imply a mass of M200m = (1.1 ± 0.2) × 10^15 M⊙, making MOO J1142+1527 the most massive galaxy cluster known at z > 1.15 and the second most massive cluster known at z > 1. For a standard ΛCDM cosmology it is further expected to be one of the ~5 most massive clusters expected to exist at z ≥ 1.19 over the entire sky. Our ongoing Spitzer program targeting ~1750 additional candidate clusters will identify comparably rich galaxy clusters over the full extragalactic sky.

I added the link to WISE, by the way.

This cluster is obviously an impressive object, and galaxy clusters are always “extreme” in the sense that they are defined to be particularly large concentrations of mass, but this one is actually in line with theoretical expectations for such objects. The following graph shows the spread of extreme cluster masses expected as a function of redshift:

If you mentally plot the mass and redshift of this beastie on the diagram you’ll see that it’s well within the comfort zone. As extreme objects go, this one is quite normal!

A Young Person’s Guide to Neutrino Physics

Posted in The Universe and Stuff with tags , , on October 28, 2015 by telescoper

I couldn’t resist sharing this charming video about neutrino physics. I don’t know who this Samantha is, but I think she’s a star!

Fracking, Gender, and the need for Open Science

Posted in Open Access, Politics, Science Politics, The Universe and Stuff with tags , , , , , , on October 24, 2015 by telescoper

I can’t resist commenting on some of the issues raised by Professor Averil MacDonald’s recent pronouncements about hydraulic fracturing (“fracking” for short). I know Averil MacDonald a little bit through SEPNet and through her work on gender issues in physics with the Institute of Physics and I therefore found some of her comments – e.g. that women “don’t understand fracking, which is why they don’t support it” – both surprising and disappointing. I was at first prepared to accept that she might have been misquoted or her words taken out of context. However she has subsequently said much the same thing in the Guardian and, worse, in an excruciating car crash of an interview on Channel 4 News. It seems that having lots of experience in gender equality matters is no barrier to indulging in simplistic generalisations; for a discussion of the poll which inspired the gender comments, and what one might or might not infer from it, see here. For the record, Professor MacDonald is Chair of UK Onshore Oil and Gas, an organization that represents and lobbies on behalf of the United Kingdom’s onshore oil and gas industry.

Before I go on I’ll briefly state my own position on fracking, which is basically agnostic. Of course, burning shale gas produces carbon dioxide, a greenhouse gas. I’m not agnostic about that. What I mean is that I don’t know whether fracking is associated with an increased risk of earthquakes or with water contamination. I don’t think there is enough reliable scientific literature in the public domain to form a rational conclusion on those questions. On the separate matter of whether there is enough shale gas to make a meaningful contribution to the UK’s energy needs I am rather less ambivalent – the balance of probability seems to me to suggest that fracking will never provide more than a sticking-plaster solution (if that) to a problem which will reach critical proportions very soon. Fracking seems to me to be a distraction; a long-term solution will have to be found elsewhere.

The central issue in the context of Averil MacDonald’s comments seems to me however to be the perception of the various risks associated with fracking that I have mentioned before, i.e. earth tremors, contaminated water supplies and other environmental dangers. I think it is perfectly rational for a scientifically literate person to be concerned about such things and to oppose fracking unless and until evidence is supplied to allay those fears. Moreover, it may be true that most women don’t understand science, but neither do most men. I suspect that goes for most of our politicians too. I’ve commented many times on what a danger it is to our democracy that science is so poorly understood among the general population, but my point here is that the important thing about fracking is not whether men understand the science better than women, but that there’s too little real scientific evidence out there for anyone – male or female, scientifically literate or not – to come to a rational conclusion about it.

I’ve yet to see any meaningful attempt in the mainstream media to engage with the actual scientific evidence involved, when surely that’s the key to whether we should “get behind” fracking or not? It struck me that quite a few readers might also be interested in this issue, so for them I’d recommend reading the Beddington Report. The problem with this report, however, is that it’s a high-level summary with no detailed scientific discussion. In my opinion it’s a very big problem that geologists and geophysicists (and climate scientists for that matter) have not adopted the ideals of the growing open science movement. In particular, it is very difficult to find any proper scientific papers on fracking and issues associated with fracking that aren’t hidden behind a paywall. If working scientists find it difficult to access the literature, how can we expect non-scientists to come to an informed conclusion?

Here’s an exception: a rare, peer-reviewed scientific article about hydraulic fracturing. The abstract of the paper reads:

The widespread use of hydraulic fracturing (HF) has raised concerns about potential upward migration of HF fluid and brine via induced fractures and faults. We developed a relationship that predicts maximum fracture height as a function of HF fluid volume. These predictions generally bound the vertical extent of microseismicity from over 12,000 HF stimulations across North America. All microseismic events were less than 600 m above well perforations, although most were much closer. Areas of shear displacement (including faults) estimated from microseismic data were comparatively small (radii on the order of 10 m or less). These findings suggest that fracture heights are limited by HF fluid volume regardless of whether the fluid interacts with faults. Direct hydraulic communication between tight formations and shallow groundwater via induced fractures and faults is not a realistic expectation based on the limitations on fracture height growth and potential fault slip.

However, it is important to realise that, as noted in the acknowledgements, the work on which this paper is based was funded by “Halliburton Energy Services, Inc., a company that is active in the hydraulic fracturing industry in sedimentary basins around the world”. And therein lies the rub. In the interest of balance here is a link to a blog post on fracking in the USA, the first paragraph of which reads:

For some time now, proponents of the controversial practice of hydraulic fracturing or “fracking” have claimed there was little or no evidence of real risk to groundwater. But as the classic saying goes: “the absence of evidence is not evidence of absence” of a problem. And the evidence that fracking can contaminate groundwater and drinking water wells is growing stronger with every new study.

I encourage you to read it, but if you do please carry on to the comments where you will see detailed counter-arguments. My point is not to say that one side is right and the other is wrong, but that there are scientists on both sides of the argument.

What I would like to see is a proper independent scientific study of the geological and geophysical risks related to hydraulic fracturing, subjected to proper peer review and published on an open access platform along with all the related data; by "independent", I mean not funded by the shale gas industry. I'm not accusing any scientists of being in the pockets of the fracking lobby, but it may look like that to the general public. If there is to be public trust in such studies then they will have to be seen to be unbiased.

Anyway, in an attempt to gauge the attitude to fracking of my totally unrepresentative readership, I thought I'd relaunch the little poll I tried a while ago:

And if you have strong opinions, please feel free to use the comments box.

Do Primordial Fluctuations have a Quantum Origin?

Posted in The Universe and Stuff with tags , , , , , , on October 21, 2015 by telescoper

A quick lunchtime post containing a confession and a question, both inspired by an interesting paper I found recently on the arXiv with the abstract:

We investigate the quantumness of primordial cosmological fluctuations and its detectability. The quantum discord of inflationary perturbations is calculated for an arbitrary splitting of the system, and shown to be very large on super-Hubble scales. This entails the presence of large quantum correlations, due to the entangled production of particles with opposite momentums during inflation. To determine how this is reflected at the observational level, we study whether quantum correlators can be reproduced by a non-discordant state, i.e. a state with vanishing discord that contains classical correlations only. We demonstrate that this can be done for the power spectrum, the price to pay being twofold: first, large errors in other two-point correlation functions, that cannot however be detected since hidden in the decaying mode; second, the presence of intrinsic non-Gaussianity the detectability of which remains to be determined but which could possibly rule out a non-discordant description of the Cosmic Microwave Background. If one abandons the idea that perturbations should be modeled by Quantum Mechanics and wants to use a classical stochastic formalism instead, we show that any two-point correlators on super-Hubble scales can exactly be reproduced regardless of the squeezing of the system. The later becomes important only for higher order correlation functions, that can be accurately reproduced only in the strong squeezing regime.

I won’t comment on the use of the word “quantumness” nor the plural “momentums”….

My confession is that I’ve never really followed the logic that connects the appearance of classical fluctuations to the quantum description of fields in models of the early Universe. People have pointed me to papers that claim to spell this out, but they all seem to miss the important business of what it means to “become classical” in the cosmological setting. My question, therefore, is can anyone please point me to a book or a paper that addresses this issue rigorously?

Please let me know through the comments box, which you can also use to comment on the paper itself…

Charles Ives & Albert Einstein: Parallel Lives

Posted in Music, The Universe and Stuff with tags , , , on October 20, 2015 by telescoper

I just noticed that today is the birthday of the great American modernist composer Charles Ives, who was born 141 years ago on this day. Some time ago I read The Life of Charles Ives by Stuart Feder, a very interesting and informative biography of one of the strangest but most fascinating composers in the history of classical music, so I thought I'd rehash an old piece I wrote about him to celebrate his birthday.

Charles Ives was by any standards a daring musical innovator. Some of his compositions involve atonal structures and some involve different parts of the orchestra playing in different time signatures. He also wrote strange and wonderful piano pieces, including some which involved re-tuning the piano to obtain scales involving quarter-tones. Among this maelstrom of modern ideas he also liked to add quotations from folk songs and old hymns which gives his work a paradoxically nostalgic tinge.

His pieces are often extremely difficult to play (so I'm told) and sometimes not that easy to listen to, but while he's often perplexing he can also be exhilarating and very moving. Other composers might play off two musical ideas against each other, but Ives would smash them together and to hell with the dissonance. I think the wholeheartedness of his eccentricity is wonderful, but I know that some people think he was just a nut. You'll have to make your own mind up on that.

My favourite quote of his can be found scrawled on a hand-written score which he sent to his copyist:

Please don’t try to make things nice! All the wrong notes are right. Just copy as I have – I want it that way.

But the point of adding this post to my blog was that in the course of reading the biography, it struck me that there is a strange parallel between the life of this controversial and not-too-well known composer and that of Albert Einstein who is certainly better known, especially to people reading what purports to be a physics blog.

For one thing their lifespans coincide pretty closely. Charles Ives was born in 1874 and died in 1954; Albert Einstein lived from 1879 to 1955. Of course the former was born in America and the latter in Germany. One inhabited the world of music and the other science; Ives, in fact, made his living in the insurance business and only composed in his spare time while Einstein spent most of his career in academia, after a brief period working in a patent office. Not everything Ives wrote was published professionally and he also rewrote things extensively, so it is difficult to establish exact dates for things, especially for a non-expert like me. In any case I don’t want to push things too far and try to argue that some spooky zeitgeist acted at a distance to summon the ideas from each of them in his own sphere. I just think it is curious to observe how similar their world lines were, at least in some respects.

We all know that Einstein’s “year of miracles” was 1905, during which he published classic papers on special relativity, Brownian motion and the photoelectric effect. What was arguably Ives’ greatest composition, The Unanswered Question, was completed in 1906 (although it was revised later). This piece is subtitled “A Cosmic Landscape” and it’s a sort of meditation on the philosophical problem of existence: the muted strings (which are often positioned offstage in concert performances) symbolize silence while the solo trumpet evokes the individual struggling to find meaning within the void. Here’s a fine recording of this work, featuring the New York Philharmonic conducted by Leonard Bernstein:

The Unanswered Question is probably Ives’ greatest masterpiece, but it wasn’t the only work he composed in 1906. A companion piece called Central Park in the Dark also dates from that year and they are sometimes performed together as a kind of diptych which offers interesting contrasts. While the former is static and rather abstract, the latter is dynamic and programmatic (in that it includes realistic evocations of night-time sounds).

Einstein's next great triumph was his General Theory of Relativity in 1915, an extension of the special theory to include gravity and accelerated motion, which came only after years of hard work learning the difficult mathematics required. Ives too was hard at work over the next decade, which resulted in other high points, although they didn't make him a household name like Einstein. The Fourth Symphony is an extraordinary work which even the best orchestras find extremely difficult to perform. Even better in my view is Three Places in New England (completed in 1914), which contains my own favourite bit of Ives. The last movement, The Housatonic at Stockbridge, is very typical of his unique approach, with a beautifully paraphrased hymn tune floating over the top of complex meandering string figures until the piece ends in a tumultuous crescendo.

After this period, both Einstein and Ives carried on working in their respective domains, and even with similar preoccupations. Einstein was in search of a unified field theory that could unite gravity with the other forces of nature, although this approach led him away from the mainstream of conventional physics research, and in his later years he became an increasingly marginal figure.

By about 1920 Ives had written five full symphonies (four numbered ones and one called the Holidays Symphony) but his ambition beyond these was perhaps just as grandiose as Einstein’s: to create a so-called “Universe Symphony” which he described (in typically bewildering fashion) as

A striving to present – to contemplate in tones rather than in music as such, that is – not exactly within the general term or meaning as it is so understood – to paint the creation, the mysterious beginnings of all things, known through God to man, to trace with tonal imprints the vastness, the spiritual eternities, from the great unknown to the great unknown.

I guess such an ambitious project – to create an entirely new language of “tones” that could give expression to timeless eternity, a kind of musical theory of everything – was doomed to failure. Although Ives was an experienced symphonic composer he couldn’t find a way to realise his vision. Only fragments of the Universe Symphony remain (although various attempts have been made by others to complete it).

In fact, the end of Ives' creative career was much more sudden and final than that of Einstein, who, although he never again reached the heights he had scaled in 1915 – who could? – remained a productive and respected scientist until his death. Ives had a somewhat melancholic disposition and from time to time suffered from depression. By 1918 he already felt that his creative flame was faltering, but by 1926 the spark was extinguished completely. His wife, appropriately named Harmony, remembered the precise day when this happened at their townhouse in New York:

He came downstairs one day with tears in his eyes, and said he couldn’t seem to compose anymore – nothing went well, nothing sounded right.

Although Charles Ives lived almost another thirty years he never composed another piece of music after that day in 1926. I find that unbearably sad, but at least a lot of his work is available and now fairly widely played. Alongside the pieces I have mentioned, there are literally hundreds of songs, some of which are exceptionally beautiful, and dozens of smaller works including piano and violin sonatas.

Although they both lived in the same part of America for many years, I don’t think Charles Ives and Albert Einstein ever met. I wonder what they would have made of each other if they had?

If you believe in the multiverse, of course, then there is a part of it in which they do meet. Einstein was an enthusiastic violinist, so there will even be a parallel world in which Einstein is playing one of Ives' violin sonatas on YouTube…

A celebration of Sir Fred Hoyle at the Royal Astronomical Society

Posted in Biographical, The Universe and Stuff with tags , , , on October 13, 2015 by telescoper

I had to miss this meeting – because I was involved in a special Senate meeting on Friday afternoon – but I did make it to the “famous RAS Dining Club” afterwards where I had a brief chat with the author of this post, Cormac O’ Raifeartaigh.

Here, for reference, is the Athenaeum, where we dined on Friday.

Athenaeum

Antimatter

The birth centenary of the noted British astrophysicist Sir Fred Hoyle was celebrated on Friday at the Royal Astronomical Society with a one-day meeting of talks describing Sir Fred’s many contributions to 20th century physics. While he is chiefly remembered in some quarters as the physicist who was ‘wrong on the big bang’, Sir Fred in fact made a number of seminal contributions to modern physics in several fields. Indeed, it was a treat to witness former collaborators and students recall his contribution to stellar nucleosynthesis, accretion physics, stellar structure, astrobiology and cosmology, to name but a few.

I hadn't been to the RAS before, although I was elected a Fellow a few years ago, and I was stunned by its fantastic central London location. It is housed in the famous Burlington House on Piccadilly, sharing the premises and courtyard with the Linnean Society, the Geological Society

View original post 903 more words