Archive for Sloan Digital Sky Survey

Three New Publications at the Open Journal of Astrophysics

Posted in OJAp Papers, Open Access, The Universe and Stuff on January 20, 2024 by telescoper

As promised yesterday, it’s time for a roundup of the week’s business at the Open Journal of Astrophysics. This past week we have published three papers, taking the count in Volume 7 (2024) up to 4 and the total published by OJAp up to 119. There are quite a few more ready to go as people return from the Christmas break.

In chronological order, the three papers published this week, with their overlays, are as follows. You can click on the images of the overlays to make them larger should you wish to do so.

First one up is “Prospects for studying the mass and gas in protoclusters with future CMB observations” by Anna Gardner and Eric Baxter (Hawaii, USA), Srinivasan Raghunathan (NCSA, USA), Weiguang Cui (Edinburgh, UK), and Daniel Ceverino (Madrid, Spain). This paper, published on 17th January 2024, uses realistic hydrodynamical simulations to probe the ability of CMB Stage 4-like (CMB-S4) experiments to detect and characterize protoclusters via gravitational lensing and the Sunyaev-Zel’dovich effect. The paper is in the category of Cosmology and Nongalactic Astrophysics.

Here is a screen grab of the overlay, which includes the abstract:


You can find the officially accepted version of the paper on the arXiv here.

The second paper to announce is “SDSS J125417.98+274004.6: An X-ray Detected Minor Merger Dual AGN” and is by Marko Mićić, Brenna Wells, Olivia Holmes, and Jimmy Irwin (all of the University of Alabama, USA). This presents the discovery of a dual AGN in a merger between the galaxy SDSS J125417.98+274004.6 and a dwarf satellite, studied using X-ray observations from the Chandra satellite. The paper was published on 18th January 2024 in the category Astrophysics of Galaxies. You can see the overlay here:


The accepted version of this paper can be found on the arXiv here.

The last paper of this batch is entitled “Population III star formation: multiple gas phases prevent the use of an equation of state at high densities” and the authors are: Lewis Prole (Maynooth, Ireland), Paul Clark (Cardiff, UK), Felix Priestley (Cardiff, UK), Simon Glover (Heidelberg, Germany) and John Regan (Maynooth, Ireland). This paper, which presents a comparison of results obtained using chemical networks and a simpler equation-of-state approach for primordial star formation (showing the limitations of the latter), was published on 19th January 2024, also in the folder marked Astrophysics of Galaxies.

Here is the overlay:


You can find the full text for this one on the arXiv here.

And that concludes the update. There’ll be more next week!


An Interactive Map of the Universe

Posted in The Universe and Stuff on November 21, 2022 by telescoper

There’s a new interactive map of the Universe created by astronomers at Johns Hopkins University using data from the Sloan Digital Sky Survey. You can read all about it here. There’s also a nice video to watch:

The picture at the top of this post is not the actual map, it’s just a publicity poster. You can play with the fully interactive version here.

This reminds me that when I started as a researcher in cosmology, back in 1985, the biggest galaxy redshift survey available had only just over a thousand galaxies in it and probed only a tiny fraction of the volume of the Universe that has now been mapped, i.e. only out to a redshift of about 0.05.

I think this is called progress!

Celebrating the Sloan Telescope

Posted in The Universe and Stuff on May 9, 2018 by telescoper

A little bird tweeted at me this morning that today is the 20th anniversary of first light through the Sloan Telescope (funded by the Alfred P. Sloan Foundation) which has, for the past two decades, been surveying as much of the sky as it can (about 25% altogether) from its location in New Mexico: the Sloan Digital Sky Survey is now on its 14th data release.

Here’s a picture of the telescope:

For those of you who want the optical details, the Sloan Telescope is a 2.5-m f/5 modified Ritchey-Chrétien altitude-azimuth telescope located at Apache Point Observatory, in south east New Mexico (Latitude 32° 46′ 49.30″ N, Longitude 105° 49′ 13.50″ W, Elevation 2788m). A 1.08 m secondary mirror and two corrector lenses result in a 3° distortion-free field of view. The telescope is described in detail in a paper by Gunn et al. (2006).

A 2.5m telescope is of modest size by the standards of modern astronomical research, but the real assets of the Sloan Telescope are its giant mosaic camera, its highly efficient instruments, and the big investment made in the software required to generate and curate the huge data sets it creates. A key feature of SDSS is that its data sets are publicly available and, as such, they have been used in countless studies by a huge fraction of the astronomical community.

The Sloan Digital Sky Survey’s original `legacy’ survey was basically a huge spectroscopic redshift survey, mapping the positions of galaxies and quasars in three dimensions to reveal the `cosmic web’ in unprecedented detail:

As it has been updated and modernised, the Sloan Telescope has been involved in a range of other surveys aimed at uncovering different aspects of the universe around us, including several programmes still ongoing.

Mapping the Universe

Posted in The Universe and Stuff on August 5, 2017 by telescoper

Following yesterday’s post, here’s a nice visualisation of how much (and indeed how little) of the Universe the latest galaxy surveys have mapped.

In this animation the Earth is at the centre, and the dots represent observed galaxies, with distances estimated using redshifts. Every blue dot in the animation is a galaxy measured by the Dark Energy Survey. Gold dots are galaxies in the DES supernova fields (measured by OzDES) and red dots are from the Sloan Digital Sky Survey. The dark space in between the surveys is yet to be mapped…

Fourier-transforming the Universe

Posted in The Universe and Stuff on November 20, 2015 by telescoper

Following the little post I did on Tuesday in reaction to a nice paper on the arXiv by Pontzen et al., my attention was drawn today to another paper related to the comment I made about using Fourier phases as a diagnostic of pattern morphology. The abstract of this one, by Way et al., is as follows:

We compute the complex 3D Fourier transform of the spatial galaxy distribution in a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields results quite similar to those from the Fast Fourier Transform (FFT) of finely binned galaxy positions. In both cases deconvolution of the sampling window function yields estimates of the true 3D transform. The Fourier amplitudes resulting from this simple procedure yield power spectrum estimates consistent with those from other much more complicated approaches. We demonstrate how the corresponding Fourier phase spectrum lays out a simple and complete characterization of non-Gaussianity that is more easily interpretable than the tangled, incomplete multi-point methods conventionally used. Measurements based on the complex Fourier transform indicate departures from exact homogeneity and isotropy at the level of 1% or less. Our model-independent analysis avoids statistical interpretations, which have no meaning without detailed assumptions about a hypothetical process generating the initial cosmic density fluctuations.

It’s obviously an excellent piece of work because it cites a lot of my papers!

But seriously I think it’s very exciting that we now have data sets of sufficient size and quality to allow us to go beyond the relatively crude statistical description provided by the power spectrum.
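
If you fancy experimenting with this yourself, here’s a minimal sketch in Python of the binned-FFT route described in the abstract above. I should stress that this is not the authors’ pipeline: the box size, grid resolution and random placeholder “galaxies” are illustrative assumptions of my own, and the deconvolution of the survey window function is omitted entirely.

```python
# Minimal sketch of the binned-FFT approach: grid the galaxy positions,
# Fourier-transform the density contrast, and separate the amplitudes
# (which give the power spectrum) from the phases (the non-Gaussianity
# diagnostic). Box size, grid and the random "catalogue" are assumptions.
import numpy as np

rng = np.random.default_rng(42)
box = 500.0                                      # box side in Mpc/h (assumed)
ngrid = 64                                       # grid cells per side (assumed)
xyz = rng.uniform(0.0, box, size=(100_000, 3))   # placeholder galaxy positions

# Nearest-grid-point binning onto a density grid, then the density contrast
counts, _ = np.histogramdd(xyz, bins=ngrid, range=[(0.0, box)] * 3)
delta = counts / counts.mean() - 1.0

# Complex 3D Fourier transform of the density contrast
dk = np.fft.fftn(delta)

power = np.abs(dk) ** 2   # squared Fourier amplitudes -> power spectrum estimate
phases = np.angle(dk)     # Fourier phase spectrum

# For a Gaussian random field (like this Poisson-sampled uniform box) the
# phases should be uniformly distributed on (-pi, pi]; structured departures
# from uniformity are the signature of non-Gaussian pattern morphology.
print(power.mean(), phases.min(), phases.max())
```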


A Flight Through the Universe

Posted in The Universe and Stuff on August 15, 2012 by telescoper

Today I’m taking a flight back from Copenhagen to London, a flight through a very small part of the Universe, so it seems apt to put it in perspective by posting this nice video produced on behalf of the Sloan Digital Sky Survey. I’ve even had the nerve to copy the blurb:

This animated flight through the universe was made by Miguel Aragon of Johns Hopkins University with Mark Subbarao of the Adler Planetarium and Alex Szalay of Johns Hopkins. There are close to 400,000 galaxies in the animation, with images of the actual galaxies in these positions (or in some cases their near cousins in type) derived from the Sloan Digital Sky Survey (SDSS) Data Release 7. Vast as this slice of the universe seems, its most distant reach is to redshift 0.1, corresponding to roughly 1.3 billion light years from Earth. SDSS Data Release 9 from the Baryon Oscillation Spectroscopic Survey (BOSS), led by Berkeley Lab scientists, includes spectroscopic data for well over half a million galaxies at redshifts up to 0.8 – roughly 7 billion light years distant – and over a hundred thousand quasars to redshift 3.0 and beyond.

Click here for more information about BOSS and the latest data release.

Cosmic Clumpiness Conundra

Posted in The Universe and Stuff on June 22, 2011 by telescoper

Well there’s a coincidence. I was just thinking of doing a post about cosmological homogeneity, spurred on by a discussion at the workshop I attended in Copenhagen a couple of weeks ago, when suddenly I’m presented with a topical hook to hang it on.

New Scientist has just carried a report about a paper by Shaun Thomas and colleagues from University College London, the abstract of which reads:

We observe a large excess of power in the statistical clustering of luminous red galaxies in the photometric SDSS galaxy sample called MegaZ DR7. This is seen over the lowest multipoles in the angular power spectra C_\ell in four equally spaced redshift bins between 0.4 \leq z \leq 0.65. However, it is most prominent in the highest redshift band at \sim 4\sigma and it emerges at an effective scale k \sim 0.01 h\,{\rm Mpc}^{-1}. Given that MegaZ DR7 is the largest cosmic volume galaxy survey to date (3.3({\rm Gpc}\, h^{-1})^3) this implies an anomaly on the largest physical scales probed by galaxies. Alternatively, this signature could be a consequence of it appearing at the most systematically susceptible redshift. There are several explanations for this excess power that range from systematics to new physics. We test the survey, data, and excess power, as well as possible origins.

To paraphrase, it means that the distribution of galaxies in the survey they study is clumpier than expected on very large scales. In fact the level of fluctuation is about a factor two higher than expected on the basis of the standard cosmological model. This shows that either there’s something wrong with the standard cosmological model or there’s something wrong with the survey. Being a skeptic at heart, I’d bet on the latter if I had to put my money somewhere, because this survey involves photometric determinations of redshifts rather than the more accurate and reliable spectroscopic variety. I won’t be getting too excited about this result unless and until it is confirmed with a full spectroscopic survey. But that’s not to say it isn’t an interesting result.

For one thing it keeps alive a debate about whether, and at what scale, the Universe is homogeneous. The standard cosmological model is based on the Cosmological Principle, which asserts that the Universe is, in a broad-brush sense, homogeneous (is the same in every place) and isotropic (looks the same in all directions). But the question that has troubled cosmologists for many years is what is meant by large scales? How broad does the broad brush have to be?

At our meeting a few weeks ago, Subir Sarkar from Oxford pointed out that the evidence for cosmological homogeneity isn’t as compelling as most people assume. I blogged some time ago about an alternative idea, that the Universe might have structure on all scales, as would be the case if it were described in terms of a fractal set characterized by a fractal dimension D. In a fractal set, the mean number of neighbours of a given galaxy within a spherical volume of radius R is proportional to R^D. If galaxies are distributed uniformly (homogeneously) then D = 3, as the number of neighbours simply depends on the volume of the sphere, i.e. as R^3, and the average number-density of galaxies. A value of D < 3 indicates that the galaxies do not fill space in a homogeneous fashion: D = 1, for example, would indicate that galaxies were distributed in roughly linear structures (filaments); the mass of material distributed along a filament enclosed within a sphere grows linearly with the radius of the sphere, i.e. as R^1, not as its volume; galaxies distributed in sheets would have D=2, and so on.
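
To make the definition concrete, here’s a little sketch (in Python, with a uniform random point set and a range of radii chosen purely for illustration) of how one might estimate D from a catalogue: count the mean number of neighbours within spheres of radius R and fit the slope of log N against log R. For a homogeneous sample like this one the fit should return D close to 3.

```python
# Sketch: estimate a fractal dimension D from N(<R) ~ R^D.
# The uniform random points and radius range are illustrative assumptions;
# a homogeneous distribution should yield D ~ 3 (up to boundary effects).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
points = rng.uniform(0.0, 100.0, size=(20_000, 3))  # placeholder "galaxies"

tree = cKDTree(points)
radii = np.logspace(0.0, 1.0, 10)   # R from 1 to 10, well inside the box

# Mean number of neighbours within R (subtracting the point itself)
mean_counts = np.array(
    [tree.query_ball_point(points, r, return_length=True).mean() - 1.0
     for r in radii]
)

# The slope of log N(<R) against log R estimates D
D, _ = np.polyfit(np.log(radii), np.log(mean_counts), 1)
print(f"estimated D = {D:.2f}")     # close to 3 for this homogeneous sample
```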

The discussion of a fractal universe is one I’m overdue to return to. In my previous post  I left the story as it stood about 15 years ago, and there have been numerous developments since then. I will do a “Part 2” to that post before long, but I’m waiting for some results I’ve heard about informally, but which aren’t yet published, before filling in the more recent developments.

We know that D \simeq 1.2 on small scales (in cosmological terms, still several Megaparsecs), but the evidence for a turnover to D=3 is not so strong. The point is, however: at what scale would we say that homogeneity is reached? Not when D=3 exactly, because there will always be statistical fluctuations; see below. What scale, then? Where D=2.9? D=2.99?

What I’m trying to say is that much of the discussion of this issue involves the phrase “scale of homogeneity” when that is a poorly defined concept. There is no such thing as “the scale of homogeneity”, just a whole host of quantities that vary with scale in a way that may or may not approach the value expected in a homogeneous universe.

It’s even more complicated than that, actually. When we cosmologists adopt the Cosmological Principle we apply it not to the distribution of galaxies in space, but to space itself. We assume that space is homogeneous so that its geometry can be described by the Friedmann-Lemaitre-Robertson-Walker metric.

According to Einstein’s theory of general relativity, clumps in the matter distribution would cause distortions in the metric which are roughly related to fluctuations in the Newtonian gravitational potential \delta\Phi by \delta\Phi/c^2 \sim \left(\lambda/ct \right)^{2} \left(\delta \rho/\rho\right), give or take a factor of a few, so that a large fluctuation in the density of matter wouldn’t necessarily cause a large fluctuation of the metric unless it were on a scale \lambda reasonably large relative to the cosmological horizon \sim ct. Galaxies correspond to a large \delta \rho/\rho \sim 10^6 but don’t violate the Cosmological Principle because they are too small to perturb the background metric significantly. Even the big clumps found by the UCL team only correspond to a small variation in the metric. The issue with these, therefore, is not so much that they threaten the applicability of the Cosmological Principle, but that they seem to suggest structure might have grown in a different way to that usually supposed.
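
To put some rough numbers in (my own round values, good only to a factor of a few, much like the formula itself), take a galaxy-sized perturbation and the present cosmological horizon:

```python
# Back-of-the-envelope check of delta_Phi/c^2 ~ (lambda/ct)^2 (delta_rho/rho).
# All numbers here are round, illustrative assumptions.
lam = 0.01          # size of a galaxy in Mpc (~10 kpc), assumed
ct = 4000.0         # cosmological horizon scale in Mpc, assumed
delta_rho = 1e6     # delta_rho/rho for a galaxy, as quoted above

metric_fluctuation = (lam / ct) ** 2 * delta_rho
print(f"delta_Phi/c^2 ~ {metric_fluctuation:.1e}")
# ~6e-6: far smaller than unity, despite the enormous overdensity
```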

The problem is that we can’t measure the gravitational potential on these scales directly so our tests are indirect. Counting galaxies is relatively crude because we don’t even know how well galaxies trace the underlying mass distribution.

An alternative way of doing this is to use not the positions of galaxies, but their velocities (usually called peculiar motions). These deviations from a pure Hubble flow are caused by lumps of matter pulling on the galaxies: the lumpier the Universe is, the larger the velocities are; and the larger the lumps are, the more coherent the flow becomes. On small scales galaxies whizz around at speeds of hundreds of kilometres per second relative to each other, but averaged over larger and larger volumes the bulk flow should get smaller and smaller, eventually coming to zero in a frame in which the Universe is exactly homogeneous and isotropic.

Roughly speaking, the bulk flow v should relate to the metric fluctuation approximately as \delta \Phi/c^2 \sim \left(\lambda/ct \right) \left(v/c\right).
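
The same back-of-the-envelope exercise works here too. With illustrative round numbers for a coherent flow (again my assumptions, not measured values), the implied metric fluctuation is of order 10^{-5}:

```python
# Rough check of delta_Phi/c^2 ~ (lambda/ct)(v/c), with assumed round numbers.
v = 300.0          # bulk flow in km/s, assumed
c = 3.0e5          # speed of light in km/s
lam = 100.0        # coherence scale of the flow in Mpc, assumed
ct = 4000.0        # cosmological horizon scale in Mpc, assumed

metric_fluctuation = (lam / ct) * (v / c)
print(f"delta_Phi/c^2 ~ {metric_fluctuation:.1e}")   # ~2.5e-5
```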

It has been claimed that some observations suggest the existence of a dark flow which, if true, would challenge the reliability of the standard cosmological framework, but these results are controversial and are yet to be independently confirmed.

But suppose you could measure the net flow of matter in spheres of increasing size. At what scale would you claim homogeneity is reached? Not when the flow is exactly zero, as there will always be fluctuations, but exactly how small?

The same goes for all the other possible criteria we have for judging cosmological homogeneity. We are free to choose the point where we say the level of inhomogeneity is sufficiently small to be satisfactory.

In fact, the standard cosmology (or at least the simplest version of it) has the peculiar property that it doesn’t ever reach homogeneity anyway! If the spectrum of primordial perturbations is scale-free, as is usually supposed, then the metric fluctuations don’t vary with scale at all: the density contrast on a scale \lambda goes as \delta\rho/\rho \propto \lambda^{-2}, which exactly cancels the factor \left(\lambda/ct\right)^{2} in the relation above. The metric fluctuations are instead fixed at a level of \delta \Phi/c^2 \sim 10^{-5}.

The fluctuations are small, so the FLRW metric is pretty accurate, but they don’t get smaller with increasing scale, so there is no scale at which it becomes exactly true. So let’s have no more of “the scale of homogeneity” as if that were a meaningful phrase. Let’s keep the discussion to the behaviour of suitably defined measurable quantities and how they vary with scale. You know, like real scientists do.

The Citation Game

Posted in Science Politics on April 8, 2010 by telescoper

Last week I read an interesting bit of news in the Times Higher that the forthcoming Research Excellence Framework (REF) seems to be getting cold feet about using citation numbers as a metric for quantifying research quality. I shouldn’t be surprised about that, because I’ve always thought it was very difficult to apply such statistics in a meaningful way. Nevertheless, I am surprised – because meaningfulness has never seemed to me to be very high on the agenda for the Research Excellence Framework….

There are many issues with the use of citation counts, some of which I’ve blogged about before, but I was interested to read another article in the Times Higher, in this week’s issue, commenting on the fact that some papers have ridiculously large author lists. The example picked by the author, Gavin Fairbairn (Professor of Ethics and Language at Leeds Metropolitan University), turns out – not entirely surprisingly – to be from the field of astronomy. In fact it’s The Sloan Digital Sky Survey: Technical Summary which is published in the Astronomical Journal and has 144 authors. It’s by no means the longest author list I’ve ever seen, in fact, but it’s certainly very long by the standards of the humanities. Professor Fairbairn goes on to argue, correctly, that there’s no way every individual listed among the authors could have played a part in the writing of the paper. On the other hand, the Sloan Digital Sky Survey is a vast undertaking and there’s no doubt that it required a large number of people to make it work. How else to give them credit for participating in the science than by having them as authors on the paper?

Long author lists are increasingly common in astronomy these days, not because of unethical CV-boosting but because so many projects involve large, frequently international, collaborations. The main problem from my point of view, however, is not the number of authors, but how credit is assigned for the work in exercises like the REF.

The basic idea about using citations is fairly sound: a paper which is important (or “excellent”, in REF language) will attract more citations than less important ones because more people will refer to it when they write papers of their own. So far, so good. However, the total number of citations for even a very important paper depends on the size and publication rate of the community working in the field. Astronomy is not a particularly large branch of the physical sciences but is very active and publication rates are high, especially when it comes to observational work. In condensed matter physics citation rates are generally a lot lower, but that’s more to do with the experimental nature of the subject. It’s not easy, therefore, to compare from one field to another. Setting that issue to one side, however, we come to the really big issue, which is how to assign credit to authors.

You see, it’s not authors that get citations, it’s papers. Let’s accept that a piece of work might be excellent and that this excellence can be quantified by the number of citations N it attracts. Now consider a paper written by a single author that has excellence-measure N versus a paper with 100 authors that has the same number of citations. Don’t you agree that the individual author of the first paper must have generated more excellence than each of the authors of the second? It seems to me that it stands to reason that the correct way to apportion credit is to divide the number of citations by the number of authors (perhaps with some form of weighting to distinguish drastically unequal contributions). I contend that such a normalized citation count is the only way to quantify the excellence associated with an individual author.
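
To illustrate the arithmetic (with papers and numbers invented purely for the example), here’s what the normalized count looks like in code:

```python
# Author-normalized citation counts: each paper's citations are shared
# equally among its authors. The papers here are invented examples.
from collections import defaultdict

# (citations, author list) for two papers with the same excellence N = 100
papers = [
    (100, ["Solo"]),                             # single-author paper
    (100, [f"Member{i}" for i in range(100)]),   # 100-author paper
]

raw = defaultdict(float)
normalized = defaultdict(float)
for citations, authors in papers:
    for author in authors:
        raw[author] += citations                        # conventional counting
        normalized[author] += citations / len(authors)  # credit shared equally

print(raw["Solo"], normalized["Solo"])              # 100.0 100.0
print(raw["Member0"], normalized["Member0"])        # 100.0 1.0
print(sum(raw.values()), sum(normalized.values()))  # 10100.0 vs 200.0 overall
```

The last line shows the overcounting at work: conventional counting spreads 10100 units of “excellence” around the community from papers that attracted only 200 citations between them.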

Of course whenever I say this to observational astronomers they accuse me of pro-theory bias, because theorists tend to work in smaller groups than observers. However, that ignores the fact that not doing what I suggest leads to a monstrous overcounting of the total amount of excellence. The total amount of excellence spread around the community for the second paper in my example is not N but 100N. Hardly surprising, then, that observational astronomers tend to have such large h-indices – they’re all getting credit for each other’s contributions as well as their own! Most observational astronomers’ citation measures reduce by a factor of 3 or 4 when they’re counted properly.

I think of the citation game as being a bit like the National Lottery. Writing a paper is like buying a ticket. You can buy one yourself, or you can club together and buy one as part of a syndicate. If you win with your own ticket, you keep the whole jackpot. If a syndicate wins, though, you don’t expect each member to win the total amount – you have to share the pot between you.