A frequent complaint raised by students of Astronomy is that astronomers insist on using funny units. Chief among them is the use of magnitudes to quantify the brightness of an object. Why not use the observed intensity (or brightness or flux) of the light from the star, which can be expressed straightforwardly in SI units, instead of faffing around with a clunky logarithmic measure? The reason we use the magnitude scale is primarily historical: the eye's response to light is more-or-less logarithmic, and in the days before calculators it was easier to deal with very large and very small numbers using logarithms. Most relevant calculations involve divisions and multiplications, which become subtractions and additions when you use logarithmic quantities.
It was Norman Pogson who first suggested that a magnitude scale be defined such that a difference of five magnitudes should correspond to a factor of 100 in actual brightness. This was because the brightest naked-eye stars – those of first magnitude – are about 100 times brighter than the faintest naked-eye stars, which are of sixth magnitude. That was in 1856 and we've been stuck with it ever since!
Although the magnitude system may appear strange, it’s not really that hard to use when you get used to it. A beginner really just needs to know a few key things:
- Bright things have lower magnitudes (e.g. first magnitude stars are brighter than second magnitude stars);
- If two stars have apparent magnitudes $m_1$ and $m_2$ respectively, then $m_1 - m_2 = -2.5 \log_{10}(F_1/F_2)$, where $F_1$ and $F_2$ are respectively the fluxes received from the two stars (see the short code sketch after this list);
- The intensity of light falls off with the square of the distance from the source;
- The absolute magnitude is the apparent magnitude a star would have if it were 10 parsecs from the observer;
- Most stars have roughly black-body spectra so their total intrinsic luminosity depends on the product of their surface area (i.e. on the square of the radius) and the fourth power of the surface temperature.
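For concreteness, here is a minimal Python sketch of those relations (the function names are my own, purely for illustration):

```python
import math

def mag_difference(flux1, flux2):
    """Pogson's relation: m1 - m2 = -2.5 log10(F1/F2)."""
    return -2.5 * math.log10(flux1 / flux2)

def flux_ratio(m1, m2):
    """Flux ratio F1/F2 implied by apparent magnitudes m1 and m2."""
    return 10 ** (-0.4 * (m1 - m2))

def distance_modulus(d_pc):
    """Apparent minus absolute magnitude for a star d_pc parsecs away."""
    return 5 * math.log10(d_pc / 10.0)

# A five-magnitude difference is a factor of 100 in flux:
print(flux_ratio(6.0, 1.0))          # 0.01 -- a sixth-magnitude star is 100x fainter
print(mag_difference(1.0, 100.0))    # 5.0 -- a star with 1/100 the flux is 5 mag fainter
# At 100 pc the apparent magnitude exceeds the absolute magnitude by 5:
print(distance_modulus(100.0))       # 5.0
```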
Got it?
To test your understanding you could try these little problems. To warm up you might look at the first of them, which I posted a while ago. Anyway, here we go:
- A binary system at a distance of 100 pc has such a small separation between its component stars that it is unresolved by a telescope. If the apparent visual magnitude of the combined image of the system is 10.5, and one star is known to have an absolute visual magnitude of 9.0, what is the absolute visual magnitude of the other star?
- Two stars are observed to have the same surface temperature, but their apparent visual magnitudes differ by 5. If the fainter star is known to be twice as far away as the brighter one, what is the ratio of the radii of the two stars?
- A binary system consists of a red giant star and a main-sequence star of the same intrinsic luminosity. The red giant has a radius 50 times that of the main-sequence star. (i) If the main-sequence star has a surface temperature of 10,000 K, what is the surface temperature of the red giant star? (ii) If the two stars can't be resolved and the combined system has an apparent magnitude of 12, what are the apparent magnitudes the two component stars would have if they could be observed separately?
Answers through the comments box please! The first correct entry wins a year’s free subscription to the Open Journal of Astrophysics…
UPDATE: Apologies for having forgotten about this post for ages. The answers are as follows, with a short numerical check after the list:
- Absolute magnitude 5.54 (apparent magnitude 10.54)
- 5:1
- (i) ~1400 K (ii) 12.75, 12.75
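For anyone who wants to check the arithmetic, here is a short Python sketch (again with names of my own choosing) that reproduces all three answers:

```python
import math

def flux_ratio(m1, m2):
    """F1/F2 implied by apparent magnitudes m1 and m2."""
    return 10 ** (-0.4 * (m1 - m2))

# 1. Unresolved binary at 100 pc: combined m = 10.5, one star has M = 9.0.
mu = 5 * math.log10(100 / 10)          # distance modulus = 5
m1 = 9.0 + mu                          # apparent magnitude of the known star = 14.0
f1_share = flux_ratio(m1, 10.5)        # its fraction of the combined flux
m2 = 10.5 - 2.5 * math.log10(1 - f1_share)
print(m2, m2 - mu)                     # 10.54, 5.54

# 2. Same temperature, dm = 5 so flux ratio 100; fainter star twice as far.
# F ~ R^2 T^4 / d^2, so 100 = (Rb/Rf)^2 * (df/db)^2 = (Rb/Rf)^2 * 4:
print(math.sqrt(100 / 4))              # 5.0, i.e. radii in ratio 5:1

# 3(i). Equal luminosities, R_giant = 50 R_ms, and L ~ R^2 T^4:
print(10000 / math.sqrt(50))           # ~1414 K for the red giant
# 3(ii). Equal luminosities at the same distance -> each star has half the flux:
print(12 - 2.5 * math.log10(0.5))      # 12.75 for each star
```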