Archive for November, 2015

To his love, by Ivor Gurney

Posted in Poetry on November 11, 2015 by telescoper

He’s gone, and all our plans
Are useless indeed.
We’ll walk no more on Cotswolds
Where the sheep feed
Quietly and take no heed.

His body that was so quick
Is not as you
Knew it, on Severn River
Under the blue
Driving our small boat through.

You would not know him now…
But still he died
Nobly, so cover him over
With violets of pride
Purple from Severn side.

Cover him, cover him soon!
And with thick-set
Masses of memoried flowers-
Hide that red wet
Thing I must somehow forget.

by Ivor Gurney (1890-1937)

Ivor Gurney enlisted in the Gloucestershire Regiment of the British Army in 1915. He was seriously wounded in the shoulder in April 1917. He recovered and was soon sent back into battle. In September 1917, at Passchendaele, he was gassed and hospitalized again. He suffered a serious nervous breakdown in 1918 and spent much of the rest of his life in mental hospitals of various kinds. He died in 1937, of tuberculosis, in such an institution – the City of London Mental Hospital. He was a composer as well as a poet, and a short piece by him was played this morning on BBC Radio 3. I’m posting this poem today, Armistice Day, when we remember the fallen, as a reminder that the legacy of war can be brutal also for those who survive.

The bizarre naked man orchid

Posted in Uncategorized on November 10, 2015 by telescoper

Tired after a long afternoon on Senate, I lack the energy to do a proper blog post so I thought I’d just reblog this. I suppose it follows on from my Anthropic Principle item!

p.s. The word “Orchid” is derived from the Greek word for testicle. I just thought you would like to know that.

Reblogged from Why Evolution Is True:

Let’s finish the week not with a cat, but a plant. This one, the “naked orchid” or “hanging naked man orchid,” is a real species, Orchis italica.

[Image: hanging naked man orchid]

There’s a reason they aren’t called the “naked hanging woman orchid”:

[Image: Orchis italica flowers]

Don’t ask me the adaptive significance, if any, of this shape. Maybe there’s some insect that has a search image for men?

To see nine more bizarre flowers, many of them orchids, go here.


The Case for Science Spending

Posted in Politics, Science Politics on November 9, 2015 by telescoper

Just a quick post with my Community Service hat on to draw your attention to the fact that the House of Commons Science and Technology Committee has issued a report, “The Science Budget” (which is available to download as a PDF here). It makes a very strong case for increasing science spending to 3% of GDP, although it suggests doing so gradually. I don’t agree with everything in it, actually, but it is good to see (in the 4th paragraph) an explicit acknowledgment of the absurdity of the current situation in which capital is given to build facilities but there is no resource available to run them (“Batteries Not Included”).

This document will hopefully help to persuade government that continued real-terms cuts in science spending make no sense whatsoever.

I’m taking the liberty of quoting the summary in full, but do read the full document. It’s very interesting.

–0–

The United Kingdom is a science superpower. In terms of both quality and productivity, our research base ‘punches above its weight’, setting a worldwide benchmark for excellence.

Government spending on the science base has been protected since 2010, with a flat-cash, ring-fenced budget for annual ‘resource’ spending distributed by the research councils, the Higher Education Funding Council and others. Annual ‘capital’ budgets have varied. The Government has already announced that capital spending within the science budget will be protected — in real terms — up to the end of 2021. The Government’s Spending Review on 25 November will determine the science — and innovation — budget allocations for the rest of this Parliament.

The UK has fallen behind its competitors in terms of total R&D investment and this will put UK competitiveness, productivity and high-value jobs at risk if it is not reversed. The Government should produce a long term ‘roadmap’ for increasing public and private sector science R&D investment in the UK to 3% of GDP — the EU target. This would send an important signal about the long term stability and sustainability of our science and innovation ecosystem, supercharging private sector R&D investment from industry, charities and overseas investors alike.

A more robust system is needed to integrate capital and resource funding allocations. The Government should urgently review existing capital allocations to ensure sufficient resource is in place to fully ‘sweat our assets’. Sufficient resource funding will only materialise, however, with an upward trajectory in the resource budget.

The Spending Review is being conducted under present accounting protocols, dealing with capital and resource budgets for science separately. ‘ESA-10’ accounting rules will in future count resource expenditure on R&D as capital, reflecting the fact that all expenditure on science research is an investment — an asset — in future economic capacity. The Government in the Spending Review should make it clear that this rules revision will not be used as a means to change the underlying funding settlement.

The ‘dual support’ system has produced a world class and highly efficient system for scientific research. Any significant changes to this system, including the balance of funding between research councils and university funding councils, would require a clear justification, which has yet to emerge. The Government should make clear its continued commitment to the dual support system, and the previous Government’s 2010 iteration of the Haldane Principle in the forthcoming Spending Review. A significant element of research funding should continue to be channelled through both the research councils and the higher education funding authorities. Clear justification will also be needed for any significant change in funding allocations between the research councils, and we caution against a radical reorganisation which could potentially harm the research programme.

Any expansion of the innovation catapult network should not come at the expense of other innovation priorities. The Government should focus on consolidating the existing catapults, to ensure that all will have the necessary operating resource and business strategies to operate at peak capacity. To show a clear commitment to innovation more generally, the Government should ring-fence Innovate UK’s budget.

The Government should also retain the current system of innovation grants — rather than loans — as a key policy tool, alongside R&D tax credits, for de-risking innovation investment.

The Spending Review will have a profound impact on our science base and our future prosperity. We have to get it right. We have a duty to take care that our spending and structural decisions in this area do more than merely maintain the status quo. If we get our spending priorities, our policies, regulatory frameworks or our immigration policy wrong, we will be on the wrong side of history. The Government must ensure that the UK remains a scientific superpower.

Aftermath, by Siegfried Sassoon

Posted in Poetry on November 8, 2015 by telescoper

I think it is appropriate to post this poem by Siegfried Sassoon this Remembrance Sunday. It was composed, I believe, sometime in 1919, and it appears in a collection published in 1920 with the same title as the poem. Its message is clear, but it is also notable for its unusual metrical structure; it’s basically iambic, but each line ends with a succession of three stressed syllables that causes the iambic rhythm to stumble. It’s a device used in classical Greek and Roman poetry to emphasize pain or discomfort on the one hand, or struggle and determination on the other. Here it seems to convey both.

Have you forgotten yet?…
For the world’s events have rumbled on since those gagged days,
Like traffic checked while at the crossing of city-ways:
And the haunted gap in your mind has filled with thoughts that flow
Like clouds in the lit heaven of life; and you’re a man reprieved to go,
Taking your peaceful share of Time, with joy to spare.
But the past is just the same–and War’s a bloody game…
Have you forgotten yet?…
Look down, and swear by the slain of the War that you’ll never forget.

Do you remember the dark months you held the sector at Mametz–
The nights you watched and wired and dug and piled sandbags on parapets?
Do you remember the rats; and the stench
Of corpses rotting in front of the front-line trench–
And dawn coming, dirty-white, and chill with a hopeless rain?
Do you ever stop and ask, ‘Is it all going to happen again?’

Do you remember that hour of din before the attack–
And the anger, the blind compassion that seized and shook you then
As you peered at the doomed and haggard faces of your men?
Do you remember the stretcher-cases lurching back
With dying eyes and lolling heads–those ashen-grey
Masks of the lads who once were keen and kind and gay?

Have you forgotten yet?…
Look up, and swear by the green of the spring that you’ll never forget.

by Siegfried Sassoon (1896-1967)

Life as a Condition of Cosmology

Posted in The Universe and Stuff on November 7, 2015 by telescoper

Trigger Warnings: Bayesian Probability and the Anthropic Principle!

Once upon a time I was involved in setting up a cosmology conference in Valencia (Spain). The principal advantage of being among the organizers of such a meeting is that you get to invite yourself to give a talk and to choose the topic. On this particular occasion, I deliberately abused my privilege and put myself on the programme to talk about the “Anthropic Principle”. I doubt if there is any subject more likely to polarize a scientific audience than this. About half the participants stayed for my talk. The other half ran screaming from the room. Hence the trigger warnings on this post. Anyway, I noticed a tweet this morning from Jon Butterworth advertising a new blog post of his on the very same subject, so I thought I’d while away a rainy November afternoon with a contribution of my own.

In case you weren’t already aware, the Anthropic Principle is the name given to a class of ideas arising from the suggestion that there is some connection between the material properties of the Universe as a whole and the presence of human life within it. The name was coined by Brandon Carter in 1974 as a corrective to the “Copernican Principle” that man does not occupy a special place in the Universe. A naïve application of this latter principle to cosmology might lead us to think that we could have evolved in any of the myriad possible Universes described by the system of Friedmann equations. The Anthropic Principle denies this, because life could not have evolved in all possible versions of the Big Bang model. There are, however, many different versions of this basic idea that have different logical structures and indeed different degrees of credibility. It is not really surprising to me that there is such a controversy about this particular issue, given that so few physicists and astronomers take time to study the logical structure of the subject, and this is the only way to assess the meaning and explanatory value of propositions like the Anthropic Principle. My former PhD supervisor, John Barrow (who is quoted in Jon Butterworth’s post), wrote the definitive text on this topic together with Frank Tipler, to which I refer you for more background. What I want to do here is to unpick this idea from a very specific perspective and show how it can be understood quite straightforwardly in terms of Bayesian reasoning. I’ll begin by outlining this form of inferential logic.

I’ll start with Bayes’ theorem, which for three logical propositions (such as statements about the values of parameters in a theory) A, B and C can be written in the form

P(B|AC) = K^{-1}P(B|C)P(A|BC) = K^{-1} P(AB|C)

where

K=P(A|C).

This is (or should be!) uncontroversial as it is simply a result of the sum and product rules for combining probabilities. Notice, however, that I’ve not restricted it to two propositions A and B as is often done, but have carried an extra one (C) throughout. This is to emphasize the fact that, to a Bayesian, all probabilities are conditional on something; usually, in the context of data analysis, this is a background theory that furnishes the framework within which measurements are interpreted. If you say this makes everything model-dependent, then I’d agree. But every interpretation of data in terms of parameters of a model is dependent on the model. It has to be. If you think it can be otherwise then I think you’re misguided.

In the equation, P(B|C) is the probability of B being true, given that C is true. The information C need not be definitely known, but perhaps assumed for the sake of argument. The left-hand side of Bayes’ theorem denotes the probability of B given both A and C, and so on. The presence of C has not changed anything, but is just there as a reminder that it all depends on what is being assumed in the background. The equation states a theorem that can be proved to be mathematically correct, so it is – or should be – uncontroversial.
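For the numerically inclined, here is a little Python sketch (entirely my own toy example, with made-up numbers) that builds a joint distribution for two propositions and checks that the sum and product rules do indeed yield the theorem above:

```python
# Toy numerical check of Bayes' theorem: P(B|AC) = P(B|C) P(A|BC) / P(A|C).
# The joint probabilities below are invented for illustration; everything
# is implicitly conditioned on some background information C.

# Joint distribution P(A, B | C) over the four truth-value combinations.
p_joint = {
    (True, True): 0.30,
    (True, False): 0.20,
    (False, True): 0.10,
    (False, False): 0.40,
}

# Marginals, via the sum rule.
p_A = sum(p for (a, _), p in p_joint.items() if a)   # P(A|C)
p_B = sum(p for (_, b), p in p_joint.items() if b)   # P(B|C)

# Conditionals, via the product rule.
p_B_given_AC = p_joint[(True, True)] / p_A           # P(B|AC)
p_A_given_BC = p_joint[(True, True)] / p_B           # P(A|BC)

# The two routes to P(B|AC) must agree, with K = P(A|C).
assert abs(p_B_given_AC - p_B * p_A_given_BC / p_A) < 1e-12
print(f"P(B|AC) = {p_B_given_AC:.4f} either way")
```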

To a Bayesian, the entities A, B and C are logical propositions which can only be either true or false. The entities themselves are not blurred out, but we may have insufficient information to decide which of the two possibilities is correct. In this interpretation, P(A|C) represents the degree of belief that it is consistent to hold in the truth of A given the information C. Probability is therefore a generalization of the “normal” deductive logic expressed by Boolean algebra: the value “0” is associated with a proposition which is false and “1” denotes one that is true. Probability theory extends this logic to the intermediate case where there is insufficient information to be certain about the status of the proposition.

A common objection to Bayesian probability is that it is somehow arbitrary or ill-defined. “Subjective” is the word that is often bandied about. This is only fair to the extent that different individuals may have access to different information and therefore assign different probabilities. Given different information C and C′ the probabilities P(A|C) and P(A|C′) will be different. On the other hand, the same precise rules for assigning and manipulating probabilities apply as before. Identical results should therefore be obtained whether these are applied by any person, or even a robot, so that part isn’t subjective at all.

In fact I’d go further. I think one of the great strengths of the Bayesian interpretation is precisely that it does depend on what information is assumed. This means that such information has to be stated explicitly. The essential assumptions behind a result can be – and, regrettably, often are – hidden in frequentist analyses. Being a Bayesian forces you to put all your cards on the table.

To a Bayesian, probabilities are always conditional on other assumed truths. There is no such thing as an absolute probability, hence my alteration of the form of Bayes’ theorem to represent this. A probability such as P(A) has no meaning to a Bayesian: there is always conditioning information. For example, if I blithely assign a probability of 1/6 to each face of a die, that assignment is actually conditional on me having no information to discriminate between the appearance of the faces, and no knowledge of the rolling trajectory that would allow me to make a prediction of its eventual resting position.

In the Bayesian framework, probability theory becomes not a branch of experimental science but a branch of logic. Like any branch of mathematics it cannot be tested by experiment but only by the requirement that it be internally self-consistent. This brings me to what I think is one of the most important results of twentieth century mathematics, but which is unfortunately almost unknown in the scientific community. In 1946, Richard Cox derived the unique generalization of Boolean algebra under the assumption that such a logic must involve associating a single number with any logical proposition. The result he got is beautiful and anyone with any interest in science should make a point of reading his elegant argument. It turns out that the only way to construct a consistent logic of uncertainty incorporating this principle is by using the standard laws of probability. There is no other way to reason consistently in the face of uncertainty than probability theory. Accordingly, probability theory always applies when there is insufficient knowledge for deductive certainty. Probability is inductive logic.

This is not just a nice mathematical property. This kind of probability lies at the foundations of a consistent methodological framework that not only encapsulates many common-sense notions about how science works, but also puts at least some aspects of scientific reasoning on a rigorous quantitative footing. This is an important weapon that should be used more often in the battle against the creeping irrationalism one finds in society at large.

To see how the Bayesian approach provides a methodology for science, let us consider a simple example. Suppose we have a hypothesis H (some theoretical idea that we think might explain some experiment or observation). We also have access to some data D, and we also adopt some prior information I (which might be the results of other experiments and observations, or other working assumptions). What we want to know is how strongly the data D support the hypothesis H given our background assumptions I. To keep it easy, we assume that the choice is between whether H is true or H is false. In the latter case, “not-H” or H′ (for short) is true. If our experiment is at all useful we can construct P(D|HI), the probability that the experiment would produce the data set D if both our hypothesis and the conditional information are true.

The probability P(D|HI) is called the likelihood; to construct it we need to have some knowledge of the statistical errors produced by our measurement. Using Bayes’ theorem we can “invert” this likelihood to give P(H|DI), the probability that our hypothesis is true given the data and our assumptions. The result looks just like we had in the first two equations:

P(H|DI) = K^{-1}P(H|I)P(D|HI).

Now we can expand the “normalising constant” K because we know that either H or H′ must be true. Thus

K=P(D|I)=P(H|I)P(D|HI)+P(H^{\prime}|I) P(D|H^{\prime}I)

The P(H|DI) on the left-hand side of the first expression is called the posterior probability; the right-hand side involves P(H|I), which is called the prior probability, and the likelihood P(D|HI). The principal controversy surrounding Bayesian inductive reasoning involves the prior and how to define it, which is something I’ll comment on in a future post.
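To make the recipe concrete, here is a toy worked example in Python (the hypotheses, data and priors are all my own inventions, purely for illustration): H is the hypothesis that a coin lands heads with probability 0.8, H′ that it is fair, and the data D are 7 heads in 10 tosses.

```python
from math import comb

# Toy application of P(H|DI) = P(H|I) P(D|HI) / K with two hypotheses.
# H: the coin lands heads with probability 0.8; H': the coin is fair (0.5).
# Data D: 7 heads in 10 tosses. All the numbers are illustrative assumptions.
n_tosses, n_heads = 10, 7

def likelihood(p_heads):
    # P(D|HI): binomial probability of n_heads heads in n_tosses tosses.
    return (comb(n_tosses, n_heads)
            * p_heads**n_heads * (1 - p_heads)**(n_tosses - n_heads))

prior_H = 0.5                # P(H|I), assumed for the sake of argument
prior_Hprime = 1 - prior_H   # P(H'|I): either H or H' must be true

# Normalising constant K = P(D|I), summed over the two alternatives.
K = prior_H * likelihood(0.8) + prior_Hprime * likelihood(0.5)

posterior_H = prior_H * likelihood(0.8) / K
print(f"P(H|DI) = {posterior_H:.3f}")   # about 0.63: the data mildly favour H
```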

The Bayesian recipe for testing a hypothesis assigns a large posterior probability to a hypothesis for which the product of the prior probability and the likelihood is large. It can be generalized to the case where we want to pick the best of a set of competing hypotheses, say H1, …, Hn. Note that this need not be the set of all possible hypotheses, just those that we have thought about. We can only choose from what is available. The hypotheses may be relatively simple, such as that some particular parameter takes the value x, or they may be composite, involving many parameters and/or assumptions. For instance, the Big Bang model of our universe is a very complicated hypothesis, or in fact a combination of hypotheses joined together, involving at least a dozen parameters which can’t be predicted a priori but which have to be estimated from observations.

The required result for multiple hypotheses is pretty straightforward: the sum of the two alternatives involved in K above simply becomes a sum over all possible hypotheses, so that

P(H_i|DI) = K^{-1}P(H_i|I)P(D|H_iI),

and

K=P(D|I)=\sum_j P(H_j|I)P(D|H_jI).

If the hypothesis concerns the value of a parameter – in cosmology this might be, e.g., the mean density of the Universe expressed by the density parameter Ω0 – then the allowed space of possibilities is continuous. The sum in the denominator should then be replaced by an integral, but conceptually nothing changes. Our “best” hypothesis is the one that has the greatest posterior probability.

From a frequentist stance the procedure is often instead simply to maximize the likelihood. According to this approach the best theory is the one that makes the data most probable. This can coincide with the most probable theory, but only if the prior probability is constant; in general the probability of a model given the data is not the same as the probability of the data given the model. I’m amazed how many practising scientists make this error on a regular basis.
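The difference is easy to demonstrate numerically. In the sketch below (again an invented example of my own) a non-constant prior on a continuous parameter shifts the peak of the posterior away from the peak of the likelihood, so the maximum-likelihood value and the most probable value disagree:

```python
import numpy as np

# A continuous parameter theta on a grid; the sum over hypotheses in K
# is approximated here by a sum over grid points.
theta = np.linspace(0.01, 0.99, 981)

# Data: 7 heads in 10 tosses (illustrative, as before).
n_tosses, n_heads = 10, 7
likelihood = theta**n_heads * (1 - theta)**(n_tosses - n_heads)

# A deliberately non-constant prior (Beta(2,5)-shaped, favouring small
# theta, chosen arbitrarily). With a flat prior the two estimates below
# would coincide.
prior = theta * (1 - theta)**4

posterior = prior * likelihood
posterior = posterior / np.sum(posterior)   # normalise on the grid (the role of K)

print("maximum-likelihood theta:", theta[np.argmax(likelihood)])  # ~0.70
print("maximum-posterior theta: ", theta[np.argmax(posterior)])   # ~0.53
```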

The following figure might serve to illustrate the difference between the frequentist and Bayesian approaches. In the former case, everything is done in “data space” using likelihoods, and in the latter we work throughout with probabilities of hypotheses, i.e. we think in hypothesis space. I find it interesting to note that most theorists that I know who work in cosmology are Bayesians and most observers are frequentists!


As I mentioned above, it is the presence of the prior probability in the general formula that is the most controversial aspect of the Bayesian approach. The attitude of frequentists is often that this prior information is completely arbitrary or at least “model-dependent”. Being empirically-minded people, by and large, they prefer to think that measurements can be made and interpreted without reference to theory at all.

Assuming we can assign the prior probabilities in an appropriate way, what emerges from the Bayesian framework is a consistent methodology for scientific progress. The scheme starts with the hardest part – theory creation. This requires human intervention, since we have no automatic procedure for dreaming up hypotheses from thin air. Once we have a set of hypotheses, we need data against which theories can be compared using their relative probabilities. The experimental testing of a theory can happen in many stages: the posterior probability obtained after one experiment can be fed in, as prior, into the next. The order of experiments does not matter. This all happens in an endless loop, as models are tested and refined by confrontation with experimental discoveries, and are forced to compete with new theoretical ideas. Often one particular theory emerges as most probable for a while, such as in particle physics, where a “standard model” has been in existence for many years. But this does not make it absolutely right; it is just the best bet amongst the alternatives. Likewise, the Big Bang model does not represent the absolute truth, but is just the best available model in the face of the manifold relevant observations we now have concerning the Universe’s origin and evolution. The crucial point about this methodology is that it is inherently inductive: all the reasoning is carried out in “hypothesis space” rather than “observation space”. The primary form of logic involved is not deduction but induction. Science is all about inverse reasoning.
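The claim that posteriors can be recycled as priors, with the order of experiments not mattering, can itself be checked in a few lines of Python (a minimal sketch with made-up data):

```python
import numpy as np

# Sequential Bayesian updating: the posterior from one experiment is fed
# in as the prior for the next. The final posterior is order-independent.
theta = np.linspace(0.01, 0.99, 981)

def update(prior, heads, tosses):
    # Multiply the prior by the likelihood of the new data, then renormalise.
    like = theta**heads * (1 - theta)**(tosses - heads)
    post = prior * like
    return post / np.sum(post)

flat_prior = np.ones_like(theta)   # a flat starting prior, for illustration
experiment_1 = (7, 10)             # 7 heads in 10 tosses (invented data)
experiment_2 = (2, 5)              # 2 heads in 5 tosses (invented data)

post_ab = update(update(flat_prior, *experiment_1), *experiment_2)
post_ba = update(update(flat_prior, *experiment_2), *experiment_1)

assert np.allclose(post_ab, post_ba)   # same posterior either way round
```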

Now, back to the anthropic principle. The point is that we can observe that life exists in our Universe and this observation must be incorporated as conditioning information whenever we try to make inferences about cosmological models if we are to reason consistently. In other words, the existence of life is a datum that must be incorporated in the conditioning information I mentioned above.

Suppose we have a model of the Universe M that contains various parameters which can be fixed by some form of observation. Let U be the proposition that these parameters take specific values U1, U2, and so on. Anthropic arguments revolve around the existence of life, so let L be the proposition that intelligent life evolves in the Universe. Note that the word “anthropic” implies specifically human life, but many versions of the argument do not necessarily accommodate anything more complicated than a virus.

Using Bayes’ theorem we can write

P(U|L,M)=K^{-1} P(U|M)P(L|U,M)

The dependence of the posterior probability P(U|L,M) on the likelihood P(L|U,M) demonstrates that the values of U for which P(L|U,M) is larger correspond to larger values of P(U|L,M); K is just a normalizing constant for the purpose of this argument. Since life is observed in our Universe the model-parameters which make life more probable must be preferred to those that make it less so. To go any further we need to say something about the likelihood and the prior. Here the complexity and scope of the model makes it virtually impossible to apply in detail the symmetry principles usually exploited to define priors for physical models. On the other hand, it seems reasonable to assume that the prior is broad rather than sharply peaked; if our prior knowledge of which universes are possible were so definite then we wouldn’t really be interested in knowing what observations could tell us. If now the likelihood is sharply peaked in U then this will be projected directly into the posterior distribution.
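Here is a one-dimensional caricature of that last point in Python (all the functional forms are invented purely for illustration): a broad prior P(U|M) multiplied by a likelihood P(L|U,M) that is sharply peaked in U gives a posterior P(U|L,M) that inherits the sharp peak:

```python
import numpy as np

# Caricature of the anthropic argument: U is a single model parameter.
# Prior P(U|M): broad. Likelihood of life P(L|U,M): sharply peaked.
# Both shapes are invented assumptions for the purpose of illustration.
U = np.linspace(0.0, 10.0, 2001)

prior = np.exp(-0.5 * ((U - 5.0) / 4.0) ** 2)       # broad prior P(U|M)
likelihood = np.exp(-0.5 * ((U - 3.0) / 0.1) ** 2)  # life only for U near 3

posterior = prior * likelihood
posterior = posterior / np.sum(posterior)           # normalise (the constant K)

# The posterior is dominated by the sharp peak of the likelihood,
# not by the breadth of the prior.
print("posterior peaks at U =", U[np.argmax(posterior)])   # ~3.0
```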

We have to assign the likelihood using our knowledge of how galaxies, stars and planets form, how planets are distributed in orbits around stars, what conditions are needed for life to evolve, and so on. There are certainly many gaps in this knowledge. Nevertheless if any one of the steps in this chain of knowledge requires very finely-tuned parameter choices then we can marginalize over the remaining steps and still end up with a sharp peak in the remaining likelihood and so also in the posterior probability. For example, there are plausible reasons for thinking that intelligent life has to be carbon-based, and therefore evolve on a planet. It is reasonable to infer, therefore, that P(U|L,M) should prefer some values of U. This means that there is a correlation between the propositions U and L in the sense that knowledge of one should, through Bayesian reasoning, enable us to make inferences about the other.
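The marginalisation step can be sketched in the same spirit (an invented two-parameter toy of my own, not a real astrophysical calculation): even after integrating out a poorly constrained step in the chain, a finely tuned step leaves a sharp peak in the marginal likelihood.

```python
import numpy as np

# Two steps in the chain of knowledge: a finely tuned parameter u and a
# poorly constrained nuisance step s. All shapes are invented toys.
u = np.linspace(0.0, 10.0, 501)
s = np.linspace(0.0, 10.0, 501)
uu, ss = np.meshgrid(u, s, indexing="ij")

# Joint likelihood: sharply peaked in u, broad and uninformative in s.
joint = (np.exp(-0.5 * ((uu - 3.0) / 0.1) ** 2)
         * np.exp(-0.5 * ((ss - 5.0) / 3.0) ** 2))

# Marginalise over the uncertain step s (grid approximation to the
# integral); the sharp peak in u survives.
marginal = joint.sum(axis=1)
print("marginal likelihood peaks at u =", u[np.argmax(marginal)])   # ~3.0
```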

It is very difficult to make this kind of argument rigorously quantitative, but I can illustrate how the argument works with a simplified example. Let us suppose that the relevant parameters contained in the set U include such quantities as Newton’s gravitational constant G, the charge on the electron e, and the mass of the proton m. These are usually termed fundamental constants. The argument above indicates that there might be a connection between the existence of life and the value that these constants jointly take. Moreover, there is no reason why this kind of argument should not be used to find the values of fundamental constants in advance of their measurement. The ordering of experiment and theory is merely an historical accident; the process is cyclical. An illustration of this type of logic is furnished by the case of a plant whose seeds germinate only after prolonged rain. A newly-germinated (and intelligent) specimen could either observe dampness in the soil directly, or infer it using its own knowledge coupled with the observation of its own germination. This type of argument, used properly, can be predictive and explanatory.

This argument is just one example of a number of its type, and it has clear (but limited) explanatory power. Indeed it represents a fruitful application of Bayesian reasoning. The question is how surprised we should be that the constants of nature are observed to have their particular values. That clearly requires a probability-based answer. The smaller the probability of a specific joint set of values (given our prior knowledge), the more surprised we should be to find them. But this surprise should be bounded in some way: the values have to lie somewhere in the space of possibilities. Our argument has not explained why life exists or even why the parameters take their values, but it has elucidated the connection between two propositions. In doing so it has reduced the number of unexplained phenomena from two to one. But it still takes our existence as a starting point rather than trying to explain it from first principles.

Arguments of this type were dubbed the Weak Anthropic Principle by Brandon Carter, and I do not believe there is any reason for them to be at all controversial. They are simply Bayesian arguments that treat the existence of life as an observation about the Universe, to be handled in Bayes’ theorem in the same way as all other relevant data and whatever other conditioning information we have. If more scientists knew about the inductive nature of their subject, then this type of logic would not have acquired the suspicious status that it currently has.

On Treasure Island

Posted in Jazz on November 6, 2015 by telescoper

After a long and very trying week I thought I’d sign off for the weekend with a lovely old bit of jazz. This is, I think, Humphrey Lyttelton’s band, vintage 1950, playing a tune, On Treasure Island, which Humph almost certainly got from the gorgeous record Louis Armstrong made of the song in the 1930s, although the Lyttelton version is very different in tempo and character.

The front line of this incarnation of the Lyttelton band was the best ever: Humphrey Lyttelton himself on trumpet, Wally Fawkes on clarinet and Keith Christie on trombone. The ensemble playing after Humph’s trumpet solo, from about 1.47, is an absolutely fantastic polyphonic blend of three great soloists. Enjoy!

The Higher Education Green Paper – Expert Commentary

Posted in Education on November 6, 2015 by telescoper

Hot news in Higher Education today is that the long-awaited Higher Education Green Paper is now published. A summary of this discussion document which is called Fulfilling our Potential: Teaching Excellence, Social Mobility and Student Choice can be found here. I haven’t got time to provide a detailed response this morning, so I will defer to an acknowledged expert on the subject of “fulfilling potential”, Dylan Moran:

Enough of the Academic Publishing Racket!

Posted in Open Access on November 5, 2015 by telescoper

There have been some interesting developments this week in the field of academic publishing. A particularly interesting story concerns the resignation of the entire editorial board of the linguistics journal Lingua, which is published by – no prizes for guessing – Elsevier. Not surprisingly, this move was made in protest at Elsevier’s overpricing of “Open Access” options on its journals. Even less surprisingly, Elsevier’s response was decidedly economical with the truth. Elsevier claims that it needs to levy large Article Processing Charges (APCs) to ensure that its Open Access publications are economically viable. However, what Elsevier means by “economically viable” is apparently a profit margin of 37% or more, all plundered from the tightly constrained budgets of academic research organizations. In fact these APCs have nothing to do with the actual cost of publishing research papers. In any other context the behaviour of publishers like Elsevier would be called racketeering, i.e.

Racketeering, often associated with organized crime, is the act of offering of a dishonest service (a “racket”) to solve a problem that wouldn’t otherwise exist without the enterprise offering the service.

Let me remind you of the business model that underpins the academic publishing industry. We academics write papers based on our research, which we then submit to journals. Other academics referee these papers, suggest corrections or improvements and recommend acceptance or rejection. Another set of academics provide oversight of this editorial process and make decisions on whether or not to publish. All of this is usually done for free. We academics then buy back the product of our labours at a grossly inflated price through journal subscriptions, unless the article is published in Open Access form, in which case we have to pay an APC up front to the publisher. It’s like having to take all the ingredients of a meal to a restaurant, cooking them yourself, and then being required to pay for the privilege of eating the resulting food.

Why do we continue to participate in such a palpably ridiculous system? Isn’t it obvious that we (I mean academics in universities) are spending a huge amount of time and money achieving nothing apart from lining the pockets of these exploitative publishers? Is it simply vanity? I suspect that many academics see research papers less as a means of disseminating research and more as badges of status…

I’d say that, at least in my discipline, traditional journals are simply no longer necessary for communicating scientific research. I find all the papers I need to do my research on the arXiv and most of my colleagues do the same. We simply don’t need old-fashioned journals anymore. Yet we keep paying for them. It’s time for those of us who believe that we should spend as much of our funding as we can on research instead of throwing it away on expensive and outdated methods of publication to put an end to this absurd system. We academics need to get the academic publishing industry off our backs.

All we need to do is dispense with the old model of a journal and replace it with a reliable and efficient reviewing system that interfaces with the arXiv. Then we would have a genuinely useful service at a fraction of the cost of a journal subscription. That was the motivation behind the Open Journal of Astrophysics, a project that I and a group of like-minded individuals will be launching very soon. There will be a series of announcements here and elsewhere over the next few weeks, giving more details about the Open Journal and how it works.

We will be starting in a modest way but I hope that those who believe – as I do – in the spirit of open science and the free flow of scientific ideas will support this initiative. I hope that the Lingua debacle is a sign that change is on the way, but we need the help and participation of researchers to make the revolution happen.

Campaigners warn on Guy Fawkes night pogonophobia

Posted in Beards, History on November 5, 2015 by telescoper

No beards on your bonfire please!!

Reblogged from Kmflett’s Blog:

Beard Liberation Front

PRESS RELEASE           2nd November

Contact Keith Flett     07803 167266

CAMPAIGNERS WARN ON GUY FAWKES BONFIRE NIGHT POGONOPHOBIA

[Image: Guy Fawkes effigy]

The Beard Liberation Front, the informal network of beard wearers, has warned of Guy Fawkes pogonophobia as bonfires around the country burn effigies of a hirsute man on Thursday evening and the following weekend.

Pogonophobia, from the ancient Greek, is an irrational fear or hatred of facial hair, known as beardism in modern English.

The BLF says that November 5th is the traditional highlight of the pogonophobes’ year, as they burn an effigy of what they assume to be a dangerous radical figure with a beard, although few will openly discuss their often deep-seated concerns about beard wearers.

BLF Organiser Keith Flett said the irony is that Guy Fawkes was a deeply reactionary character who, had he lived now, would almost certainly not have had a beard under any…


MADCOWS and Extreme Galaxy Clusters

Posted in The Universe and Stuff, Uncategorized on November 4, 2015 by telescoper

I thought I’d do a quick post just to have an excuse to post this very pretty picture I found in a press release from JPL:

[Image: extreme galaxy cluster]

This is a distant galaxy cluster found in the “Massive And Distant Clusters Of WISE Survey”, which is known by its acronym “MADCOWS”. Ho Ho Ho. If the previous link is inaccessible, because you don’t have a subscription, then don’t worry: the paper concerned is available for free on the arXiv. If the previous link isn’t inaccessible, because you do have a subscription, then do worry, because you’re wasting your money…

Anyway the abstract of the paper, by Gonzalez et al., reads:

We present confirmation of the cluster MOO J1142+1527, a massive galaxy cluster discovered as part of the Massive and Distant Clusters of WISE Survey. The cluster is confirmed to lie at z = 1.19, and using the Combined Array for Research in Millimeter-wave Astronomy we robustly detect the Sunyaev–Zel’dovich (SZ) decrement at 13.2σ. The SZ data imply a mass of M_200m = (1.1 ± 0.2) × 10^15 M_⊙, making MOO J1142+1527 the most massive galaxy cluster known at z > 1.15 and the second most massive cluster known at z > 1. For a standard ΛCDM cosmology it is further expected to be one of the ~5 most massive clusters expected to exist at z ≥ 1.19 over the entire sky. Our ongoing Spitzer program targeting ~1750 additional candidate clusters will identify comparably rich galaxy clusters over the full extragalactic sky.

I added the link to WISE, by the way.

This cluster is obviously an impressive object, and galaxy clusters are always “extreme” in the sense that they are defined to be particularly large concentrations of mass, but this one is actually in line with theoretical expectations for such objects. The following graph shows the spread of extreme cluster masses expected as a function of redshift:

If you mentally plot the mass and redshift of this beastie on the diagram you’ll see that it’s well within the comfort zone. As extreme objects go, this one is quite normal!