Archive for the The Universe and Stuff Category

Publish or be Damned

Posted in Science Politics, The Universe and Stuff on August 23, 2010 by telescoper

For tonight’s post I thought I’d compose a commentary on a couple of connected controversies suggested by an interestingly provocative piece by Nigel Hawkes in the Independent this weekend entitled Peer Review journals aren’t worth the paper they’re written on. Here is an excerpt:

The truth is that peer review is largely hokum. What happens if a peer-reviewed journal rejects a paper? It gets sent to another peer-reviewed journal a bit further down the pecking order, which is happy to publish it. Peer review seldom detects fraud, or even mistakes. It is biased against women and against less famous institutions. Its benefits are statistically insignificant and its risks – academic log-rolling, suppression of unfashionable ideas, and the irresistible opportunity to put a spoke in a rival’s wheel – are seldom examined.

In contrast to many of my academic colleagues I largely agree with Nigel Hawkes, but I urge you to read the piece yourself to see whether you are convinced by his argument.

I’m not actually convinced that peer review is as biased as Hawkes asserts. I rather think that the strongest argument against the scientific journal establishment is the ruthless racketeering of the academic publishers that profit from it. Still, I do think he has a point. Scientists who garner esteem and influence in the public domain through their work should be required to defend it out in the open, to scientists and non-scientists alike. I’m not saying that’s easy to do in the face of ill-informed or even illiterate criticism, but it is in my view a necessary price to pay, especially when the research is funded by the taxpayer.

It’s not that I think many scientists are involved in sinister activities, manipulating their data and fiddling their results behind closed doors, but that as long as there is an aura of secrecy it will always fuel the conspiracy theories on which the enemies of reason thrive. We often hear the accusation that scientists behave as if they are priests. I don’t think they do, but there are certainly aspects of scientific practice that make it appear that way, and the closed world of academic publishing is one of the things that desperately needs to be opened up.

For a start, I think we scientists should forget academic journals and peer review, and publish our results directly in open access repositories. In the old days journals were necessary to communicate scientific work, and peer review guaranteed a certain level of quality, but nowadays they are unnecessary. Good work will achieve visibility through the attention others give it. Likewise, open scrutiny will be a far more effective way of identifying errors than the existing referee process. Some steps will have to be taken to prevent abuse of access to the databases, and even then I suspect a good many crank papers will make it through. But in the long run, I strongly believe this is the only way that science can develop in the age of digital democracy.

But scrapping the journals is only part of the story. I’d also argue that all scientists undertaking publicly funded research should be required to put their raw data in the public domain too. I would allow a short proprietary period after the experiments, observations or whatever form of data collection is involved. I can also see that ethical issues may require certain data to be withheld, such as the names of subjects in medical trials. Issues will also arise when research is funded commercially rather than by the taxpayer. However, I still maintain that full disclosure of all raw data should be the rule rather than the exception. After all, if it’s research that’s funded by the public, it is really the public that owns the data anyway.

In astronomy this is pretty much the way things operate nowadays, in fact. Maybe stargazers have a more romantic way of thinking about scientific progress than their more earthly counterparts, but it is quite normal – even obligatory for certain publicly funded projects – for surveys to release all their data. I used to think that it was enough just to publish the final results, but I’ve become so distrustful of the abuse of statistics throughout the field that I think it is necessary for independent scientists to check every step of the analysis of every major result. In the past it was simply too difficult to publish large catalogues in a form that anyone could use, but nowadays that is no longer the case. Astronomers have embraced this reality, and it has liberated them.

To give a good example of the benefits of this approach, take the Wilkinson Microwave Anisotropy Probe (WMAP), which released full data sets after one, three, five and seven years of operation. Scores of groups around the world have done their best to find glitches in the data and errors in the analysis without turning up anything particularly significant. The standing of the WMAP team is all the higher for having done this, although I don’t know whether they would have chosen to do so had they not been required to under the terms of their funding!

In the world of astronomy research it’s not at all unusual to find data for the object or set of objects you’re interested in from a public database, or by politely asking another team if they wouldn’t mind sharing their results. And if you happen to come across a puzzling result you suspect might be erroneous and want to check the calculations, you just ask the author for the numbers and, generally speaking, they send them to you. A disagreement may ensue about who is right and who is wrong, but that’s the way science is supposed to work. Everything must be open to question. It’s often a chaotic process, but it’s a process all the same, and it is one that has served us incredibly well.

I was quite surprised recently to learn that this isn’t the way other scientific disciplines operate at all. When I challenged the statistical analysis in a paper on neuroscience recently, my request to have a look at the data myself was greeted with a frosty refusal. The authors seemed to take it as a personal affront that anyone might have the nerve to question their study. I had no alternative but to go public with my doubts, and my concerns have never been satisfactorily answered. How many other examples are there in which the application of the scientific method has come to a grinding halt because of compulsive secrecy? Nobody likes to have their failings exposed in public, and I’m sure no scientist likes to see an error pointed out, but surely it’s better to be seen to have made an error than to maintain a front that perpetuates the suspicion of malpractice?

Another, more topical, example concerns the University of East Anglia’s Climatic Research Unit which was involved in the Climategate scandal and which has apparently now decided that it wants to share its data. Fine, but I find it absolutely amazing that such centres have been able to get away with being so secretive in the past. Their behaviour was guaranteed to lead to suspicions that they had something to hide. The public debate about climate change may be noisy and generally ill-informed but it’s a debate we must have out in the open.

I’m not going to get all sanctimonious about “pure” science, nor am I going to question the motives of individuals working in disciplines I know very little about. I would, however, say that from the outside it certainly appears that there is often a lot more going on in the world of academic research than the simple quest for knowledge.

Of course there are risks in opening up the operation of science in the way I’m suggesting. Cranks will probably proliferate, but we’ll no doubt get used to them; I’m a cosmologist, so I’m pretty much used to them already! Some good work may find it a bit harder to be recognized. Lack of peer review may mean more erroneous results see the light of day. Empire-builders won’t like it much either, as a truly open system of publication will be a great leveller of reputations. But in the final analysis, the risk of sticking to our arcane practices is far higher. Public distrust will grow and centuries of progress may be swept aside on a wave of irrationality. If the price for avoiding that is to change our attitude to who owns our data, then it’s a price well worth paying.



Nicola Cabibbo (1935-2010)

Posted in The Universe and Stuff on August 16, 2010 by telescoper

Just a short post to convey the very sad news that the great Italian physicist Nicola Cabibbo passed away today at the age of 75. I know I’m not alone in thinking that he should have received a share of the 2008 Nobel Prize, half of which was awarded to Yoichiro Nambu and the other half of which was shared by Makoto Kobayashi and Toshihide Maskawa.

As I wrote in 2008:

All three are extremely distinguished physicists and their contributions certainly deserve to be rewarded. But, in the case of Kobayashi and Maskawa, the Nobel Foundation has made a startling omission that I really can’t understand at all and which even threatens to undermine the prestige of the prize itself. The work for which these two were given half the Nobel Prize this year relates to the broken symmetry displayed by weak interactions between quarks. We now know that there are three generations of quarks, each containing quarks of two different flavours. The first generation contains the up (u) and the down (d), the second the strange (s) and the charmed (c), and the third has the bottom (b) and the top (t). OK, so the names are daft, but physicists have never been good at names.

The world of quarks is difficult to penetrate because quarks interact via the strong force, which binds them close together into hadrons: either baryons (three quarks) or mesons (a quark and an anti-quark).

But there are other kinds of particles too, the leptons. These are also arranged in three generations, but each of these families contains a charged particle and a neutrino. The first generation is an electron and a neutrino, the second a muon and its neutrino, and the third has the tau and another neutrino. One might think that the three quark generations and the three lepton generations have some deep equivalence between them, but leptons aren’t quarks, so they can’t interact at all via the strong interaction. Quarks and leptons can both interact via the weak interaction (the force responsible for radioactive beta-decay).

Weak interactions between leptons conserve generation, so the total number of particles of electron type is never changed (ignoring neutrino oscillations, which have only relatively recently been discovered). It seemed natural to assume that weak interactions between quarks should do the same thing, forbidding processes that hop between generations. Unfortunately, however, this is not the case. There are weak interactions that appear to convert u and/or d quarks into c and/or s quarks, but these seem to be relatively feeble compared to interactions within a generation, which seem to happen with about the same strength for quarks as they do for leptons. This all suggests that there is some sort of symmetry lurking somewhere in there, but it’s not quite what one might have anticipated.

The explanation of this was proposed by Nicola Cabibbo who, using a model in which there are only two quark generations, developed the idea that states of pure quark flavour (“u” or “d”, say) are not really what the weak interaction “sees”. In other words, the quark flavour states are not proper eigenstates of the weak interaction. All that is needed is to imagine that the required eigenstates are a linear combination of the flavour states and, Bob’s your uncle, quark generation needn’t be conserved. This phenomenon is called Quark Mixing. What makes it simple for only two generations is that it can be described entirely by one number: the Cabibbo angle, which measures how much the quark flavour basis is misaligned with the weak interaction basis. The angle is small so the symmetry is only slightly broken.
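To make the two-generation picture concrete, here is a minimal numerical sketch. The Cabibbo angle value used (about 13 degrees) is the standard measured figure, but treat the snippet as an illustration of the idea rather than a full parameterization: the weak eigenstates are just a rotation of the flavour states, the rotation is automatically unitary, and cross-generation transition rates are suppressed by sin²θC.

```python
import math

# Sketch of two-generation quark mixing. theta_c is the Cabibbo angle,
# roughly 13 degrees; the precise value here is illustrative.
theta_c = math.radians(13.0)

# The mixing "matrix" for two generations is just a 2x2 rotation:
#   d' =  cos(theta_c) d + sin(theta_c) s
#   s' = -sin(theta_c) d + cos(theta_c) s
V = [[math.cos(theta_c),  math.sin(theta_c)],
     [-math.sin(theta_c), math.cos(theta_c)]]

# A rotation is automatically unitary: its rows are orthonormal.
row0, row1 = V
assert abs(row0[0]**2 + row0[1]**2 - 1.0) < 1e-12        # |d'| = 1
assert abs(row0[0]*row1[0] + row0[1]*row1[1]) < 1e-12    # d' and s' orthogonal

# Cross-generation ("Cabibbo-suppressed") rates go as sin^2(theta_c),
# i.e. only a few per cent of the within-generation rates.
print(round(math.sin(theta_c)**2, 3))  # -> 0.051
```

So the feebleness of generation-hopping weak decays is captured by a single small number, which is exactly why the symmetry is described as only slightly broken.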

Kobayashi and Maskawa generalized the work of Cabibbo to the case of three quark generations. That’s actually quite a substantial task as the description of mixing in this case requires not just a single number but a 3×3 matrix each of whose entries is complex. This matrix is universally called the Cabibbo-Kobayashi-Maskawa (CKM) matrix and it now crops up all over the standard model of particle physics.

And there’s the rub. Why on Earth was Cabibbo not awarded a share of this year’s prize? I was shocked and saddened to find out that he’d been passed over despite the fact that his work so obviously led the way. I can think of no reason why he was omitted. It’s outrageous. I even feel sorry for Kobayashi and Maskawa, because there is certain to be such an outcry about this gaffe that it may detract from their success.

But really…

I hope, however, that the controversy doesn’t intrude too much on the forthcoming celebration of Cabibbo’s immense contributions to particle physics. I’ll leave it to the experts to write more detailed appreciations that do better justice to his achievements. I’ll just say that I only met him once in real life, but found him charmingly modest and altogether quite delightful company. He will be greatly missed.



The Next Decade of Astronomy?

Posted in Science Politics, The Universe and Stuff on August 14, 2010 by telescoper

I feel obliged to pass on the news that the results of the Decadal Review of US Astronomy were announced yesterday. There has already been a considerable amount of reaction to what the Review Panel (chaired by the esteemed Roger Blandford) came up with from people much more knowledgeable about observational astronomy and indeed US Science Politics, so I won’t try to do a comprehensive analysis here. I draw your attention instead to the report itself  (which you can download in PDF form for free)  and Julianne Dalcanton’s review of, and comments on, the Panel’s conclusions about the priorities for  space-based and ground-based astronomy for the next decade or so over on Cosmic Variance.  There’s also a piece by Andy Lawrence over on The e-Astronomer’s blog. I’ll just mention that Top of the Pops for space-based astronomy is the Wide-Field Infrared Survey Telescope (WFIRST) which you can read a bit more about here, and King of the Castle for the ground-based programme is the Large Synoptic Survey Telescope (LSST). Both of these hold great promise for the area I work in – cosmology and extragalactic astrophysics – so I’m pleased to see our American cousins placing such a high priority on them. The Laser Interferometer Space Antenna (LISA), which is designed to detect gravitational waves, also did very well, which is great news for Cardiff’s Gravitational Physics group.

It will be interesting to see what effect – if any – these priorities have on the ranking of corresponding projects this side of the Atlantic. Some of the space missions involved in the Decadal Review in fact depend on both NASA and ESA, so there clearly will be a big effect in such cases. For example, the proposed International X-ray Observatory (IXO) did less well than many might have anticipated, with clear implications for Europe (including the UK). The current landscape of X-ray astronomy is dominated by Chandra and XMM, both of which were launched in 1999 and both of which are nearing the end of their operational lives. Since X-ray astronomy can only be done from space, abandoning IXO would basically mean the end of the subject as we know it, but the question is how to bridge the gap between the end of these two missions and the start of IXO, even if it does go ahead, which may not be until long after 2020. Should we keep X-ray astronomers on the payroll twiddling their thumbs for the next decade when other fields are desperately short of manpower for science exploitation?

On a more general level, it’s not obvious how we should react when the US gives a high priority to a given mission anyway. Of course, it gives us confidence that we’re not being silly when very smart people across the Pond endorse missions and facilities similar to ones we are considering over here. However, generally speaking the Americans tend to be able to bring missions from the drawing board to completion much faster than we can in Europe. Just compare WMAP with Planck, for instance. Trying to compete with the US, rather than collaborate, seems likely to ensure only that we remain second best. There’s an argument, therefore, for Europe having a programme that is, in some respects at least, orthogonal to the United States; in matters where we don’t collaborate, we should go for facilities that complement rather than compete with those the Americans are building.

It’s all very well talking of priorities in the UK, but we all know that the Grim Reaper is shortly going to be paying a visit to the budget of the agency that administers funding for our astronomy, STFC. This organisation went through a financial crisis all of its very own in 2007, from which it is still reeling. Now it has to face the prospect of further savage cuts. The level of “savings” being discussed (at least 25%) means that the STFC management must be pondering some pretty drastic measures, even pulling out of the European Southern Observatory (which we only joined in 2002). The trouble is that most of the other ground-based astronomical facilities used by UK astronomers have been earmarked for closure, or STFC has withdrawn from them. Britain’s long history of excellence in ground-based astronomy now hangs in the balance. It’s scary.

I hope the government can be persuaded that STFC should be spared another big cut, and I’m sure that there’s extensive lobbying going on. Indeed, STFC has already requested input to its plans for the ongoing Comprehensive Spending Review (CSR). With this in mind, the Royal Astronomical Society has produced a new booklet designed to point out the relevance of astronomy to wider society. However, I can’t rid from my mind the memory of a certain meeting in London in 2007 at which the STFC Chief Executive revealed the true scale of STFC’s problems. He predicted that things would be much worse at the next CSR, i.e. this one. And that was before the Credit Crunch, and the consequent arrival of a new government swinging a very large axe. I wish I could be optimistic but, frankly, I’m not.

When the CSR is completed, STFC will yet again have to do a hasty re-prioritisation. Its Science Board has clearly been preparing:

… Science Board discussed a number of thought provoking scenarios designed to explore the sort of issues that the Executive may be confronted with if there were to be a significant funding reduction as a result of the 2010 comprehensive spending review settlement. As a result of these deliberations Science Board provided the Executive with guidance on how to take forward this strategic planning.

This illustrates a big difference in the way such prioritisation exercises are carried out in the UK versus the USA. The Decadal Review described above is a high-profile study, carried out by a panel of distinguished experts, which takes detailed input from a large number of scientists, and which delivers a coherent long-term vision for the future of the subject. I’m sure not everyone agrees with their conclusions, but the vast majority respect its impartiality and level-headedness and have confidence in the overall process. Here in the UK we have “consultation exercises” involving “advisory panels” who draw up detailed advice which then gets fed into STFC’s internal panels. That bit is much like the Decadal Review. However, at least in the case of the last prioritisation exercise, the community input doesn’t seem to bear any obvious relationship to what comes out the other end. I appreciate that there are probably more constraints on STFC’s Science Board than it has degrees of freedom, but there’s no getting away from the sense of alienation and cynicism this has generated across large sections of the UK astronomy community.

The problem with our system is that we always seem to be reacting to financial pressure rather than taking the truly long-term “blue-skies” view that is clearly needed for big science projects of the type under discussion. The Decadal Review, for example, places great importance on striking a balance between large- and small-scale experiments. Here we tend to slash the latter because they’re easier to kill than the former. If this policy goes on much longer, we’ll end up with a few enormous, expensive facilities but none of the truly excellent science that can be done using smaller kit. A crucial aspect of this is that science seems to have been steadily relegated in importance in favour of technology ever since the creation of STFC. This must be reversed. We need a proper strategic advisory panel with strong scientific credentials that stands outside the existing STFC structure but has real influence on STFC planning, i.e. one that plays the same role in the UK as the Decadal Review does in the States.

Assuming, of course, that there’s any UK astronomy left in the next decade…

The Fractal Universe, Part 1

Posted in The Universe and Stuff on August 4, 2010 by telescoper

A long time ago I blogged about the Cosmic Web and one of the comments there suggested I write something about the idea that the large-scale structure of the Universe might be some sort of fractal.  There’s a small (but vocal) group of cosmologists who favour fractal cosmological models over the more orthodox cosmology favoured by the majority, so it’s definitely something worth writing about. I have been meaning to post something about it for some time now, but it’s too big and technical a matter to cover in one item. I’ve therefore decided to start by posting a slightly edited version of a short News and Views piece I wrote about the  question in 1998. It’s very out of date on the observational side, but I thought it would be good to set the scene for later developments (mentioned in the last paragraph), which I hope to cover in future posts.

—0—

One of the central tenets of cosmological orthodoxy is the Cosmological Principle, which states that, in a broad-brush sense, the Universe is the same in every place and in every direction. This assumption has enabled cosmologists to obtain relatively simple solutions of Einstein’s General Theory of Relativity that describe the dynamical behaviour of the Universe as a whole. These solutions, called the Friedmann models [1], form the basis of the Big Bang theory. But is the Cosmological Principle true? Not according to Francesco Sylos-Labini et al. [2], who argue, controversially, that the Universe is not uniform at all, but has a never-ending hierarchical structure in which galaxies group together in clusters which, in turn, group together in superclusters, and so on.

These claims are completely at odds with the Cosmological Principle and therefore with the Friedmann models and the entire Big Bang theory. The central thrust of the work of Sylos-Labini et al. is that the statistical methods used by cosmologists to analyse galaxy clustering data are inappropriate because they assume the property of large-scale homogeneity at the outset. If one does not wish to assume this then one must use different methods.

What they do is to assume that the Universe is better described in terms of a fractal set characterized by a fractal dimension D. In a fractal set, the mean number of neighbours of a given galaxy within a volume of radius R is proportional to R^D. If galaxies are distributed uniformly then D = 3, as the number of neighbours simply grows with the volume of the sphere, i.e. as R^3, times the average number-density of galaxies. A value of D < 3 indicates that the galaxies do not fill space in a homogeneous fashion: D = 1, for example, would indicate that galaxies were distributed in roughly linear structures (filaments); the mass of material distributed along a filament enclosed within a sphere grows linearly with the radius of the sphere, i.e. as R^1, not as its volume. Sylos-Labini et al. argue that D = 2, which suggests a roughly planar (sheet-like) distribution of galaxies.
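As a toy illustration of this counting argument – emphatically not the estimator used on real survey data, just the scaling N(R) ∝ R^D applied to idealised point sets – one can count neighbours of a central point at two radii and read off an estimate of D:

```python
import math

# Toy fractal-dimension estimate: count neighbours of the origin within
# two radii and use N(R) ~ R^D, so D ~ log(N2/N1) / log(R2/R1).
def neighbour_count(points, radius):
    """Number of points within `radius` of the origin."""
    origin = (0.0, 0.0, 0.0)
    return sum(1 for p in points if math.dist(p, origin) <= radius)

def fractal_dim(points, r1=5.0, r2=10.0):
    """Estimate D from neighbour counts at two radii."""
    n1, n2 = neighbour_count(points, r1), neighbour_count(points, r2)
    return math.log(n2 / n1) / math.log(r2 / r1)

# Idealised distributions on a unit lattice:
grid = range(-20, 21)
line  = [(x, 0, 0) for x in grid]                              # filament, D -> 1
sheet = [(x, y, 0) for x in grid for y in grid]                # sheet,    D -> 2
cube  = [(x, y, z) for x in grid for y in grid for z in grid]  # uniform,  D -> 3

for name, pts in [("line", line), ("sheet", sheet), ("cube", cube)]:
    print(name, round(fractal_dim(pts), 2))  # estimates close to 1, 2 and 3
```

Real analyses must of course average over many centres and worry about survey boundaries, which is precisely where much of the controversy lies.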

Most cosmologists would accept that the distribution of galaxies on relatively small scales, up to perhaps a few tens of megaparsecs (Mpc), can indeed be described in terms of a fractal model. This small-scale clustering is expected to be dominated by purely gravitational physics, and gravity has no particular length scale associated with it. But standard theory requires that the fractal dimension should approach the homogeneous value D = 3 on large enough scales. According to standard models of cosmological structure formation, this transition should occur on scales of a few hundred Mpc.

The main source of the controversy is that most available three-dimensional maps of galaxy positions are not large enough to encompass the expected transition to homogeneity. Distances must be inferred from redshifts, so these maps have to be constructed from redshift surveys, which are difficult to compile because they require spectroscopic studies of large numbers of galaxies.

Sylos-Labini et al. have analysed a number of redshift surveys, including the largest so far available, the Las Campanas Redshift Survey [3]; see below. They find D = 2 for all the data they look at, and argue that there is no transition to homogeneity for scales up to 4,000 Mpc, way beyond the expected turnover. If this were true, it would indeed be bad news for the orthodox among us.

The survey maps the Universe out to recession velocities of 60,000 km s⁻¹, corresponding to distances of a few hundred million parsecs. Although no fractal structure on the largest scales is apparent (there are no clear voids or concentrations on the same scale as the whole map), one statistical analysis [2] finds a fractal dimension of two in this and other surveys, for all scales – conflicting with a basic principle of cosmology.
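The conversion from the survey’s limiting recession velocity to distance is just Hubble’s law, d = v/H0. A one-line sketch, assuming a round value of H0 = 70 km/s/Mpc (the actual value was considerably more uncertain when this piece was written):

```python
# Hubble's law, d = v / H0, converts the survey's limiting recession
# velocity into a distance. H0 = 70 km/s/Mpc is an assumed round value.
H0 = 70.0            # Hubble constant, km/s per Mpc (assumption)
v = 60_000.0         # limiting recession velocity of the survey, km/s
d_mpc = v / H0       # distance in megaparsecs
print(round(d_mpc))  # -> 857, i.e. several hundred megaparsecs
```

A larger (or smaller) assumed H0 shrinks (or stretches) all such inferred distances proportionately, which is one reason survey depths quoted in velocity units are less ambiguous.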

Their results are, however, at variance with the visual appearance of the Las Campanas survey, for example, which certainly seems to display large-scale homogeneity. Objections to these claims have been lodged by Luigi Guzzo [4], for instance, who has criticized their handling of the data and has presented independent results that appear to be consistent with a transition to homogeneity. It is also true that Sylos-Labini et al. have done their cause no good by basing some conclusions on a heterogeneous compilation of redshifts called the LEDA database [5], which is not a controlled sample and so is completely unsuitable for this kind of study. Finally, it seems clear that they have substantially overestimated the effective depth of the catalogues they are using. But although their claims remain controversial, the consistency of the results obtained by Sylos-Labini et al. is impressive enough to raise doubts about the standard picture.

Mainstream cosmologists are not yet so worried as to abandon the Cosmological Principle. Most are probably quite happy to admit that there is no overwhelming direct evidence in favour of global uniformity from current three-dimensional galaxy catalogues, which are in any case relatively shallow. But this does not mean there is no evidence at all: the near-isotropy of the sky temperature of the cosmic microwave background, the uniformity of the cosmic X-ray background, and the properties of source counts are all difficult to explain unless the Universe is homogeneous on large scales [6]. Moreover, Hubble’s law itself is a consequence of large-scale homogeneity: if the Universe were inhomogeneous one would not expect to see a uniform expansion, but an irregular pattern of velocities resulting from large-scale density fluctuations.

But above all, it is the principle of Occam’s razor that guides us: in the absence of clear evidence against it, the simplest model compatible with the data is to be preferred. Several observational projects are already under way, including the Sloan Digital Sky Survey and the Anglo-Australian 2DF Galaxy Redshift Survey, that should chart the spatial distribution of galaxies in enough detail to provide an unambiguous answer to the question of large-scale cosmic uniformity. In the meantime, and in the absence of clear evidence against it, the Cosmological Principle remains an essential part of the Big Bang theory.

References

  1. Friedmann, A. Z. Phys. 10, 377–386 (1922).
  2. Sylos-Labini, F., Montuori, M. & Pietronero, L. Phys. Rep. 293, 61–226 (1998).
  3. Shectman, S. et al. Astrophys. J. 470, 172–188 (1996).
  4. Guzzo, L. New Astron. 2, 517–532 (1997).
  5. Paturel, G. et al. in Information and Online Data in Astronomy (eds Egret, D. & Albrecht, M.) 115 (Kluwer, Dordrecht, 1995).
  6. Peebles, P. J. E. Principles of Physical Cosmology (Princeton Univ. Press, NJ, 1993).

A Sonnet of Significance

Posted in Poetry, The Universe and Stuff on August 3, 2010 by telescoper

Inspired by Dennis Overbye’s nice article in the New York Times about the plethora of false detections in physics and astronomy, and another one in Physics World by Robert P Crease with a similar theme, I’ve decided to relaunch my campaign to become the next Poet Laureate with this Sonnet (in Petrarchan form) which I offer as an homage to John Keats. I’ve slavishly copied the rhyme scheme of one of Keats’ greatest poems, although I think I’ve made all the lines scan properly, which he didn’t manage to do in the original. Nevertheless, I’m sure that if he were alive today he’d be turning in his grave.

Much have I marvell’d at discov’ries bold
And many gushing press releases seen
But often what is “found” just hasn’t been
(Though only rather later are we told).
Be doubtful if you ever do behold
A scientific “certainty” between
The pages of a Sunday magazine;
The complex truth is rarely so extolled.
So if you are a watcher of the skies
Or particle detection is your yen,
Refrain from spreading rumour and surmise
Lest you look silly time and time again.
Two sigma peaks – so you should realise –
Are naught but noise, so hold your tongue. Amen.
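In case the arithmetic behind that final couplet isn’t familiar, here is a short sketch. The figure of 100 independent places to look is an illustrative assumption, not a statement about any particular search:

```python
import math

# One-sided tail probability of a Gaussian fluctuation of >= n sigma.
def p_above(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

p2 = p_above(2.0)
print(round(p2, 4))     # -> 0.0228: a 2-sigma "peak" arises by chance
                        # about once in every 44 tries

# Searches rarely make just one try. With, say, 100 independent bins or
# channels to look in, at least one 2-sigma excess is close to inevitable:
p_any = 1.0 - (1.0 - p2) ** 100
print(round(p_any, 2))  # -> 0.9
```

This multiple-comparisons effect is why particle physicists demand five sigma, not two, before the word “discovery” is allowed anywhere near a press release.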

Crater 308

Posted in Art, The Universe and Stuff on August 1, 2010 by telescoper

I haven’t got time to post much today – WordPress was down earlier when I had a bit of time and now I’m going to watch the highlights of England’s Test victory against Pakistan in the cricket today, which they achieved by bowling out their opponents for only 80 runs in the second innings.

Nevertheless, as a quick filler, I thought it would be nice to show this wonderful image of the crater Daedalus, formerly known as Crater 308, which is located on the far side of the Moon. Not the dark side, by the way: the far side of the Moon gets just as much sunlight as the near side!

This is one of the images I’ve been working on as part of the project Beyond Entropy for a forthcoming exhibit at the Venice Biennale of Architecture, which opens at the end of this month. I won’t say too much about the exhibit I’m involved with, except that it explores the way higher-dimensional information can be recorded in surfaces of lower dimension, like a kind of architectural holographic principle. I was particularly struck by the way the pattern of cratering on the Moon yields information about its formation history, which is why I went looking for dramatic examples. This one, taken during the Apollo 11 mission, is my favourite of all those I’ve looked at. I love the complex topography, its textural contrasts and the way the shadows play across it.

Daedalus is an impact crater that formed about 3.75 to 3.2 billion years ago and is about 93 km across. It looks relatively fresh, with sharpish rims all around, sequences of wonderfully preserved terraces descending to a pock-marked, flat floor dotted with numerous craterlets, and a central peak divided into two or three well-defined hills. You can also see the effect of more recent impacts in and around it.

Talking of impact, I wonder if I can get this project into our REF submission?

A Martian Oz?

Posted in The Universe and Stuff with tags , , on July 31, 2010 by telescoper

I noticed a news item last week about research which points out the remarkable fact that parts of Mars look a bit like Australia. Take this image, for example, of the region called Nili Fossae in which the Sydney Opera House can be seen clearly in the upper left…

Apparently the rocks in this region “resemble” those in an area of Australia called the Pilbara. Scientists believe that microbes formed some distinctive features in the Pilbara rocks – features called “stromatolites” that can be seen and studied today. According to  Adrian Brown, who works for the SETI Institute,

“Life made these features. We can tell that by the fact that only life could make those shapes; no geological process could.”

Unfortunately, however, all that has really been established is that the Martian rocks have a similar mineral composition to those found in Australia – there’s no evidence (yet) that the “features” made by living creatures are present. Nevertheless, the newspapers have got very excited about this, and today’s Guardian even ran an editorial on the item, from which I quote

Sceptics may think the comparison tenuous. They may also note that yesterday’s news reports either framed the possibility as a question – could there be life? – or put it in inverted commas. There is no proof. There is quite likely no life either.

Quite.

I always find it very interesting how everyone gets so worked up about the possibility of there being, or having been, life on Mars when we’re such careless custodians of the flora and fauna of our own planet. I suppose behind it all there’s a hope that there might be sentient beings out there in space who can tell us how to look after ourselves a bit better than we’re able to figure out for ourselves.

Unfortunately, the recent “discovery” provides very strong evidence against there being any form of intelligent life whatsoever on Mars. After all, it’s just like Australia.

A Problem in Dynamics

Posted in Poetry, The Universe and Stuff with tags , , on July 23, 2010 by telescoper

I thought you might enjoy this “poem” which, believe it or not, was written by the great physicist James Clerk Maxwell. You can find other examples of his verse here. All I can say is I’m glad he didn’t give up his day job…

An inextensible heavy chain
Lies on a smooth horizontal plane,
An impulsive force is applied at A,
Required the initial motion of K.

Let ds be the infinitesimal link,
Of which for the present we’ve only to think;
Let T be the tension, and T + dT
The same for the end that is nearest to B.
Let a be put, by a common convention,
For the angle at M ’twixt OX and the tension;
Let Vt and Vn be ds’s velocities,
Of which Vt along and Vn across it is;
Then Vn/Vt the tangent will equal,
Of the angle of starting worked out in the sequel.

In working the problem the first thing of course is
To equate the impressed and effectual forces.
K is tugged by two tensions, whose difference dT
Must equal the element’s mass into Vt.
Vn must be due to the force perpendicular
To ds’s direction, which shows the particular
Advantage of using da to serve at your
Pleasure to estimate ds’s curvature.
For Vn into mass of a unit of chain
Must equal the curvature into the strain.

Thus managing cause and effect to discriminate,
The student must fruitlessly try to eliminate,
And painfully learn, that in order to do it, he
Must find the Equation of Continuity.
The reason is this, that the tough little element,
Which the force of impulsion to beat to a jelly meant,
Was endowed with a property incomprehensible,
And was “given,” in the language of Shop, “inexten-sible.”
It therefore with such pertinacity odd defied
The force which the length of the chain should have modified,
That its stubborn example may possibly yet recall
These overgrown rhymes to their prosody metrical.
The condition is got by resolving again,
According to axes assumed in the plane.
If then you reduce to the tangent and normal,
You will find the equation more neat tho’ less formal.
The condition thus found after these preparations,
When duly combined with the former equations,
Will give you another, in which differentials
(When the chain forms a circle), become in essentials
No harder than those that we easily solve
In the time a T totum would take to revolve.

Now joyfully leaving ds to itself, a-
Ttend to the values of T and of a.
The chain undergoes a distorting convulsion,
Produced first at A by the force of impulsion.
In magnitude R, in direction tangential,
Equating this R to the form exponential,
Obtained for the tension when a is zero,
It will measure the tug, such a tug as the “hero
Plume-waving” experienced, tied to the chariot.
But when dragged by the heels his grim head could not carry aught,
So give a its due at the end of the chain,
And the tension ought there to be zero again.
From these two conditions we get three equations,
Which serve to determine the proper relations
Between the first impulse and each coefficient
In the form for the tension, and this is sufficient
To work out the problem, and then, if you choose,
You may turn it and twist it the Dons to amuse.

Off the Main Sequence…

Posted in Biographical, The Universe and Stuff with tags , , , , , , , on July 22, 2010 by telescoper

When I was at School, one of my English teachers enjoyed setting creative writing challenges for homework. One of the things he liked to do was to give us two apparently separate topics and get us to write a short story that managed to tie them together. Although I seldom got good marks I now realise that this is quite a useful skill to develop.  Sometimes, when I’ve been at a loss for something  to blog about, I’ve taken two items from the news and tried to link them somehow. That’s also how a lot of satire works – many of the best Private Eye skits involve putting two pieces of news together in a way that’s deliberately back to front. In fact many writers have commented along similar lines,  the most famous being E. M. Forster, whose advice to a young writer was “Only Connect”.

Yesterday the news was full of stories emanating from the discovery of a very massive star, in fact the most massive one ever found. The news also got the Jonathan Amos treatment on the BBC science website. I think it’s quite an interesting discovery, but it didn’t generate much enthusiasm from Lord Rees, who wrote in a Guardian article

I don’t view this discovery as a big breakthrough. It’s a bit bigger than other stars of this kind that we’ve seen and it’s nice that it involves British scientists and the world’s biggest telescope. It’s a step forward, but it is not more than an incremental advance in our knowledge.

What’s interesting about this star is that it may shed some light – actually, rather a lot of light, because it’s 10,000,000 times brighter than the Sun – on the properties of very big stars as well as possibly how they form.

There was even an item on local radio last night, which reported

The biggest star ever discovered was recently found by astronomers in Sheffield.

You’d think if it was that bright and so nearby somebody in Sheffield would have noticed it long before now…

A star this big – about 300 times the mass of the Sun – operates on the same basic mechanism as the Sun but the quantitative details are very different. Its surface temperature is about 40,000 Kelvin compared to the Sun’s, which is only about 6000K, so the radiation field it generates is very much more powerful. It’s also very much larger, probably about 50 times the Sun’s radius, so there’s more surface area to radiate. It’s a very big and very bright beastie.
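As a rough sanity check on the figures above, the Stefan–Boltzmann law ties the quoted radius and temperature to the quoted brightness via L/L⊙ = (R/R⊙)² (T/T⊙)⁴. The sketch below just plugs in the round numbers from this paragraph (and an assumed solar surface temperature of about 5800 K), so treat it as an order-of-magnitude estimate rather than a model of R136a1:

```python
# Order-of-magnitude check of the quoted brightness using the
# Stefan-Boltzmann law: L / L_sun = (R / R_sun)**2 * (T / T_sun)**4.

T_sun = 5800.0      # approximate solar surface temperature, K
T_star = 40_000.0   # quoted surface temperature of the new star, K
R_ratio = 50.0      # quoted radius in solar radii

L_ratio = R_ratio**2 * (T_star / T_sun)**4
print(f"L/L_sun ~ {L_ratio:.1e}")
```

This comes out at several million solar luminosities, consistent with the ten-million figure given the roughness of the inputs.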

The name of this star is R136a1 but, given its new status as a media star, it really needs a better one. In fact, there’s a suggestions page here. Let me see. Overweight and prominent in the media? No Eamonn Holmes gags, please.

A star is basically just a ball of hot gas which exerts pressure forces that balance the force of gravity, which tries to make it collapse, in a form of hydrostatic equilibrium. With so much mass to hold up the pressure in the centre of the star has to be very large, and it therefore has to be very hot. The energy needed to keep it hot comes from nuclear reactions that mainly burn hydrogen to make helium (as in the Sun), but the rate of these processes is sensitively dependent on the temperature and density in the star’s core. The Sun is a relatively sedate pressure-cooker that will  simmer away for billions of years. A monster like the one just found guzzles fuel at such a rate that its lifetime will only be a few million years. Like megastars in other fields, this one will live fast and die young.
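The live-fast-die-young argument above amounts to a simple scaling: fuel goes with mass, burn rate with luminosity, so the lifetime scales as t ∝ M/L. The back-of-envelope sketch below uses the quoted figures for this star and an assumed ten-billion-year solar lifetime; it is deliberately crude (it ignores, among other things, the different fractions of fuel the two stars actually burn), so it should only be trusted to an order of magnitude:

```python
# Crude lifetime estimate from the scaling t ~ M / L:
# fuel reserves scale with mass, burn rate with luminosity, so
# t_star ~ t_sun * (M / M_sun) / (L / L_sun).

t_sun_years = 1.0e10   # assumed main-sequence lifetime of the Sun
M_ratio = 300.0        # quoted mass in solar masses
L_ratio = 1.0e7        # quoted luminosity in solar luminosities

t_star = t_sun_years * M_ratio / L_ratio
print(f"t_star ~ {t_star:.0e} years")
```

This lands within an order of magnitude of the few-million-year lifetime mentioned above, which is about as well as such a one-line scaling can be expected to do.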

Nobody really knows how big the biggest star can be. Very big stars produce such intense radiation that radiation pressure is more important than gas pressure in supporting the star against collapse, but if the star is too big (and therefore too hot) then the radiation field will blow the star apart. This is when the so-called Eddington Limit is reached. Where the line is drawn isn’t all that clear, but the new star suggests that the limit lies a bit higher up the mass scale than previously thought. I think it’s interesting.
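The Eddington Limit has a standard closed form: the luminosity at which radiation pressure on ionised hydrogen balances gravity is L_Edd = 4πGMm_p c/σ_T. The sketch below evaluates it for a 300-solar-mass star using standard physical constants (the constants and the choice of pure-hydrogen opacity are my inputs, not from the post):

```python
import math

# Eddington luminosity: radiation pressure on ionised hydrogen balances
# gravity when L_Edd = 4 * pi * G * M * m_p * c / sigma_T.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.673e-27      # proton mass, kg
c = 2.998e8          # speed of light, m/s
sigma_T = 6.652e-29  # Thomson scattering cross-section, m^2
M_sun = 1.989e30     # solar mass, kg
L_sun = 3.828e26     # solar luminosity, W

def eddington_luminosity(mass_kg):
    """Eddington luminosity in watts for a star of the given mass."""
    return 4 * math.pi * G * mass_kg * m_p * c / sigma_T

L_edd = eddington_luminosity(300 * M_sun)
print(f"L_Edd for 300 M_sun ~ {L_edd / L_sun:.1e} L_sun")
```

Reassuringly, this comes out around ten million solar luminosities – right at the quoted brightness of the star, which is exactly why it’s interesting for the question of where the limit lies.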

I’ve written about this star partly to make a point about how wonderful astronomy is for teaching physics. To understand how a star works you need to take into account thermal physics, gravity, nuclear physics, radiative transport and a whole load of other things besides. Putting all that physics together to produce a stellar model is a great way to illustrate the much-neglected synthetic (rather than analytic) side of (astro)physical theory education. Stars are good.

Cue cheesy link to another item.

The single biggest step towards the understanding of stellar structure and evolution was the Hertzsprung–Russell diagram, or HR diagram for short, which shows that there is a Main Sequence of stars (to which the Sun belongs). Main sequence stars have luminosities and temperatures that are related to each other because both are determined by the star’s mass. That’s because they’re all described by the same basic physics – hydrostatic equilibrium, nuclear burning, etc. – but just come in different masses. They adjust their temperature and luminosity in order to find an equilibrium configuration.
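The mass-determines-everything point can be made concrete with the textbook homology scalings. The sketch below assumes the standard approximate relations L ∝ M³·⁵ and R ∝ M⁰·⁸ (in solar units, valid only roughly and only for solar-ish stars – the exponents are my assumption, not from the post) and then gets the temperature from the Stefan–Boltzmann law, so that feeding in a single parameter, the mass, traces out a toy main sequence:

```python
# Toy main sequence from textbook scaling relations: given only a mass
# (in solar units), L ~ M**3.5 and R ~ M**0.8 fix the luminosity and
# radius, and L = 4*pi*R^2*sigma*T^4 then fixes the temperature.

T_sun = 5800.0  # approximate solar effective temperature, K

def main_sequence_point(mass):
    """Return (temperature_K, luminosity_Lsun) for a mass in solar masses."""
    L = mass**3.5
    R = mass**0.8
    T = T_sun * (L / R**2) ** 0.25
    return T, L

for m in (0.5, 1.0, 2.0, 10.0):
    T, L = main_sequence_point(m)
    print(f"M = {m:5.1f} M_sun: T ~ {T:6.0f} K, L ~ {L:9.1f} L_sun")
```

One input, one curve: that is the Main Sequence in miniature, and why stars that deviate from it must have something else going on.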

Not all stars are main sequence stars, however. There are classes of stars with different things going on and these lie in other regions of the HR diagram.

With this in mind, the Astronomy Blog has constructed an amusing career-related version of the HR diagram which I’ve reproduced here:

Instead of plotting temperature against luminosity (or, to be precise, colour against magnitude) as in the standard version this one plots academic publications against google hits, which purport to be a measure of “fame”. A traditional academic will presumably acquire fame through their publications only, thus defining a main sequence, whereas some lie off that sequence because of media work, blogging, or (perhaps) involvement in a juicy sex scandal. I don’t think fame and notoriety are distinguished in this calculation.

I know quite a few colleagues have been quietly calculating where they lie on the above diagram, as indeed have I. Vanity, you see, is very contagious. I’m not named on the version shown, but I can tell you that I’m much more famous than Andy Lawrence, who is. So there.

Lines on the non-Discovery of the Higgs Boson

Posted in Poetry, The Universe and Stuff with tags , , on July 14, 2010 by telescoper

In search of fame I spread around
A rumour that the Higgs was found;
But now it’s clear
it wasn’t true,
My career has just gone down the loo.

(by Peter Coles, aged 47½)