Astronomy

What limits the use of the H-R diagram to measure distance (main sequence fitting), what distances is it useful for?


Is it only possible to measure objects that formed around the same time? Is it possible to measure clusters in distant galaxies other than our own?


It depends how precise you need to be. Main sequence fitting assumes that the star(s) in question is(are) on the main sequence. If you have a cluster of stars (at the same age) then defining what is on the main sequence and what isn't becomes much, much easier and of course you have lots of stars with which to beat down the statistical uncertainty.

In fact, for coeval groups of stars what you term "main sequence fitting" is rarely done. The process is the fitting of an "isochrone" (a line that links points in the HR diagram at a single age), so that the distance and age (and also extinction and metallicity) are all possible free parameters.

The subtle distinction here is that what is termed "the main sequence" does not really exist in practice, or at least not as a uniquely defined locus in the HR diagram. Stars begin their main sequence lives on the "zero age main sequence" (ZAMS) and end their main sequence careers at the "terminal age main sequence" (TAMS), gradually changing their luminosity and temperature as they do so.

Here is a diagram from Martignoni et al. (2014) showing the ZAMS and TAMS for stars of different masses. There is typically a factor 2-3 in luminosity between them (a larger difference at larger masses). That means whether you use the ZAMS, the TAMS, or something in between to determine the distance from a vertical displacement in the HR diagram, you could vary your answer for the distance by $\sqrt{2}$ to $\sqrt{3}$. In other words, you need to know the age of a main sequence star before "main-sequence fitting" can give you an accurate distance.
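To see where the $\sqrt{2}$ to $\sqrt{3}$ comes from: the observed flux is $f = L/(4\pi d^2)$, so at a fixed measured flux the inferred distance scales as $\sqrt{L}$. A minimal sketch of that scaling:

```python
import math

# Flux f = L / (4 pi d^2): at fixed observed flux, the inferred distance
# scales as sqrt(L), so a factor 2-3 ambiguity in the assumed luminosity
# (ZAMS vs. TAMS) becomes a factor sqrt(2)-sqrt(3) in distance.
for lum_ratio in (2.0, 3.0):
    print(f"luminosity ratio {lum_ratio:.0f} -> distance ratio {math.sqrt(lum_ratio):.2f}")
```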

Of course, lower mass stars are longer lived. Anything of say 0.7 solar masses or below has hardly moved from the ZAMS in the age of the Galaxy, so there would be little error in assuming a ZAMS locus. Conversely, the effects of age are far more rapid and therefore far more important on the main sequence at higher masses.

If you were to try to use main-sequence fitting to estimate the distance to individual stars then there are several hazards. For one, it can be nearly impossible to estimate the age of an individual star. Therefore if it has a mass greater than 0.7 solar masses then there will be an uncertainty in its position in the absolute HR diagram that leads to an inevitable uncertainty in estimated distance. Further, the intrinsic position in the HR diagram depends on the star's chemical composition. Such ancillary information might be available, but it might not, in which case that is another source of error. A further source of systematic uncertainty is stellar rotation. Fast rotating stars have extended lifetimes on the main sequence and somewhat different intrinsic positions on the HR diagram; again a source of systematic uncertainty that is especially problematic for high mass stars. Finally, it can be that what you think is an isolated main sequence star is in fact a binary system. A companion can increase the luminosity of the system and make a star appear closer than it actually is (by up to a factor $\sqrt{2}$).


Using supernovae to measure distances

The methods we've described so far can only reach out to the nearest galaxy clusters. If we wish to probe deeper into the universe, we need to find new ways to estimate distances. The methods I'll describe today each have a strong point and a weakness.

What is a supernova?

The short answer is "a star which explodes."

Now, there are several mechanisms which cause a star to explode, and the pre-explosion object can have several different forms. We'll discuss some of those issues a bit later.

But from an observational point of view, a supernova is a star which suddenly appears in a galaxy, shines as brightly as an entire galaxy for a few weeks or months, and then gradually fades away. Here are a few nice examples of recent supernovae.


Images of SN 2011fe in M101 courtesy of PTF and B. J. Fulton


Images of SN 2014J in M82 courtesy of Scott McNeil

SN 2017eaw in NGC 6946 -- which is bright right now!

Now, these pictures don't really show how bright a supernova can be. In many cases, a supernova can outshine all the stars in its host galaxy for a brief period. For example, the earlier pictures of SN 2011fe in M101 give the impression that the SN was just one little blob of light among many others.


Image of SN 2011fe in M101 courtesy of PTF and B. J. Fulton and cropped by me.

But if one uses a little telescope, like my 12-inch Meade LX200 at the RIT Observatory, and takes only a short exposure, then we can compare the light from the galaxy's nucleus and spiral arms to that from the supernova more clearly. Below is an R-band (red light) image taken on Sep 25, 2011, when SN 2011fe was already past its peak brightness and starting to fade.


R-band image of SN 2011fe in M101 courtesy of Michael Richmond and the RIT Observatory

And below is an image taken through a B-band (blue light) filter. The SN, all by itself, produces much, much more blue light than the billions of stars in the nucleus of this galaxy!


B-band image of SN 2011fe in M101 courtesy of Michael Richmond and the RIT Observatory

The good and bad aspects of supernovae as distance indicators

    Good
      Very luminous, so can be seen at very large distances. Astronomers have found supernovae well beyond z=1, with the most distant event at z=3.9! That means that supernovae can reach MUCH farther into space than any other method we have discussed.

The star "S Andromeda" was a type Ia explosion in the Andromeda Galaxy, but it happened just a few decades before astronomers were ready with equipment powerful enough to study it properly.

So many varieties.

There is a very wide variety of supernova classes and sub-classes. It's easy to get lost in the different types and designations. For our purposes, all those fine distinctions aren't necessary. Even though there APPEAR to be many different types, one can in the end separate them into just two varieties.

Want some practice? Look at the following pictures, and try to separate the four different animals into two species. All the individuals pictured have two legs, one head, and stand about two meters tall.


Image of ostrich courtesy of berniedup and Wikipedia . Image of Rob Gronkowski courtesy of Wellslogan and Wikipedia. Image of Apollo astronaut courtesy of NASA. Image of Cao Yuan courtesy of Fernando Frazão/Agência Brasil and Wikipedia

(I hope you got this right)

There are just TWO species, even though all the pictures look quite different.

The three different humans LOOK different because they are wearing different amounts of material over their bodies; if you could slice each human in half, you'd find the same stuff inside: bone, muscle, blood, etc.

In a core-collapse supernova:

  1. the center of the core (usually) collapses into a neutron star or black hole
  2. the (relatively massive) outer layers of the star are heated to roughly 50,000 K and fly off into space at speeds of order 10,000 km/s

Image copyright David Hardy

"White dwarf" supernovae arise from binary-star systems in which an ordinary main sequence star is close to a carbon-oxygen white dwarf (the single degenerate scenario). Material from the main-sequence star can -- under the right circumstances -- escape from the outer atmosphere and form an accretion disk around the white dwarf. If the rate of mass accretion onto the white dwarf falls into the proper range, then the white dwarf's mass may eventually reach the Chandrasekhar limit, about 1.4 solar masses. At that point, little regions of thermonuclear reactions near the center of the white dwarf may enter a runaway instability, turning most of the white dwarf from C-O to Fe-group elements and producing enough energy to blow the entire star out into space.

Well, that's one possibility. Another is that TWO white dwarfs in a close orbit may eventually merge (the double degenerate scenario). The merger creates a single object which again exceeds the Chandrasekhar limit, and, once again, Ka-Boom.

In a white-dwarf supernova:

  1. the entire white dwarf is destroyed, so there is no remnant
  2. the (relatively small) body of the star is heated to roughly 50,000 K and flies off into space at speeds of order 10,000 km/s

In both cases, very roughly 10^51 ergs of energy are released by the, um, complex processes occurring within the central regions of the progenitor star. So, to a very, very rough approximation, in both cases we see the same thing: an expanding cloud of extremely hot gas, flying outward at very high speeds, reaching peak absolute magnitudes of roughly MV = -17 to -19.

How can we use this big explosion to measure a distance?

Expanding Photosphere Method with Type IIP

In a Type IIP event, the core of a massive star collapses:

  • the innermost core becomes a neutron star or black hole
  • huge numbers of neutrinos fly out into space immediately
  • a shock wave slowly (over the course of hours) pushes its way through the envelope to the photosphere

The shock wave heats the bulk of the star and accelerates it outward, to speeds far above the escape velocity. In a word, the star explodes. However, if one looks more closely, one finds that the velocities of different layers of the star vary in a systematic way: material from the inner regions has a relatively small velocity, while material from the outer regions has a relatively high velocity. We call this homologous expansion.

The shock wave heats material up to very high temperatures, well over 100,000 Kelvin, ionizing all the hydrogen. The outermost layers emit X-rays and UV for several hours after the shock breaks out of the star, but then cool off quickly. When the temperature of the gas falls to roughly 6000 Kelvin, however, the hydrogen begins to recombine. At this point, as the outermost layer switches from ionized to neutral, its opacity drops: neutral hydrogen gas is MUCH more transparent than ionized gas.

When the outermost layer recombines, it becomes essentially transparent, and we can see into the layer below it. This layer is still hot enough that hydrogen is ionized, but within a short time, it, too, cools off to about 6000 Kelvin and recombines. When it becomes transparent, we can then see into the NEXT layer of the star, and so on and so on.

  • the outermost "visible" material is defined by the region in which hydrogen is starting to recombine
  • this outermost layer will have a reasonably well-defined temperature of about 6000 K
  • the layer is moving outward quickly

As time goes by, and we see further into the star, the velocity of this special layer will decrease. The graph below shows the evolution of a computer model of a Type II supernova -- the lines are drawn at roughly one-week intervals.


Figure taken from Kasen and Woosley, ApJ 703, 2205 (2009)

The solid curve in the figure below shows the evolution of a computer model of an exploding star, while the circles show measurements of real supernovae.


Figure taken from Kasen and Woosley, ApJ 703, 2205 (2009)

The radius of any particular layer of material at some time t can be written as

R(t) = R0 + v (t - t0)

where t0 is the time of explosion and R0 is the radius of that layer at the time of explosion. With high-quality spectra of the supernova, we can measure the velocity of one particular layer of the star -- the layer which happens to be acting as the photosphere at the moment.

Because the photosphere is in such a simple condition -- nearly pure hydrogen, at a temperature close to 6000 K -- it isn't too difficult to compute the radiation it emits. To first order, the photosphere acts like a "dilute blackbody", emitting flux

F(t) = ξ 4π R(t)^2 σ T^4

where T is the temperature (roughly 6000 K), and ξ is a "dilution factor", inserted into the ordinary blackbody equation to account for several factors which cause the spectrum of the actual star to differ from that of a perfect blackbody.

Using spectroscopy and photometry, we can measure v(t) and the observed flux f(t). If we make measurements at several different times, when the photosphere has different sizes and luminosities, we have enough information to solve for the time of explosion and the initial size of the star, and to determine the distance as well: it's just a matter of comparing the luminosity F(t) to the observed flux f(t) and using the inverse square law (and hoping that there was no extinction, etc.).

It is a nice coincidence that the speed with which the recombination wave runs deeper into the star is very roughly the same as the rate at which the star is expanding; in other words, the radius of the apparent photosphere doesn't change a great deal while the wave is still moving through the envelope. As a result, the luminosity of Type IIP supernovae hits a plateau (hence the "P") and remains nearly constant for a month or two.


Figure taken from Jones and Hamuy, RMxAC, 35, 310 (2009)

As an example of this technique, let's look at the nearby SN IIP 2013ej in M74. Its light curves show clear evidence for a "plateau" as the photosphere recedes into the ejecta.


Figure 3 of Richmond, JAVSO 42, 333 (2014)

We'll use only the V-band measurements for this quick little in-class calculation. Let's pick the date JD = 2456510, which is just a bit after the time of maximum light.


Taken from Figure 3 of Richmond, JAVSO 42, 333 (2014)

If one assumes that the photosphere is emitting like a blackbody, then one can estimate the temperature by fitting the measured fluxes to spectra of Planck functions for different temperatures. The temperature will decrease with time, in general.


Taken from Figure 8 of Richmond, JAVSO 42, 333 (2014)

We'll need to know the velocity of this portion of the ejecta in order to figure out its size. Observations and modelling by a number of researchers suggest that the explosion occurred on JD 2,456,494, so our chosen date of 2,456,510 is about 16 days after explosion.


Figure 2a modified slightly from Valenti et al., MNRAS 438, L101 (2014)

  • At the temperature you've estimated, about 8 percent of a blackbody's radiation falls into the V passband
  • An object of apparent magnitude V = 0 has a flux above the Earth's atmosphere of roughly 3.16 x 10^-6 ergs per sq.cm per second
  • in this case, the "dilution factor" is approximately ξ = 0.5

So, you should now be able to make a rough guess at the distance to this supernova.
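Here is a minimal sketch of that arithmetic. The constants are the ones quoted above; the temperature, velocity, and magnitude are illustrative placeholders standing in for values read off the figures, so treat the output strictly as a ballpark number.

```python
import math

XI      = 0.5           # dilution factor (quoted above)
FRAC_V  = 0.08          # fraction of blackbody flux in the V band (quoted above)
F_V0    = 3.16e-6       # erg/cm^2/s for a V = 0 star (quoted above)
SIGMA   = 5.67e-5       # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
AGE     = 16 * 86400    # 16 days since explosion, in seconds

T_PHOT   = 8000.0       # photospheric temperature, K        (placeholder)
V_EJECTA = 9.0e8        # photospheric velocity, cm/s        (placeholder, 9000 km/s)
V_MAG    = 12.5         # apparent V mag at JD 2456510       (placeholder)

R   = V_EJECTA * AGE                                        # photospheric radius, cm
L_V = XI * 4 * math.pi * R**2 * SIGMA * T_PHOT**4 * FRAC_V  # V-band luminosity
f_V = F_V0 * 10**(-0.4 * V_MAG)                             # observed V-band flux
D   = math.sqrt(L_V / (4 * math.pi * f_V))                  # inverse-square law
print(f"rough EPM distance: {D / 3.086e24:.1f} Mpc")
```

With these made-up inputs the sketch lands within a factor of order unity of the more careful result quoted below, which is about all a back-of-the-envelope EPM estimate can promise.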

When I went through a more complicated procedure using data from this supernova, I found that the Expanding Photosphere Method yielded a distance of about 9.1 +/- 0.4 Mpc. Other methods suggest a similar distance. How does your value compare?

    Weaknesses
    • the real photosphere is not a blackbody
    • the layer of gas producing most of the light may not be the same as the layer producing the absorption lines, from which we measure velocity

    We can apply this technique to large distances, because Type IIP SNe are very luminous: their typical absolute magnitudes are between -15 and -18. Look at the example of SN 2013eq!


    Table 4 taken from Gall et al., A&A 592, 129 (2016)

    Type Ia: standard-izable candles

    Now let's consider the "White Dwarf" supernovae. The basic idea for using them as distance indicators is very simple:

    In the Old Days (1970s and 1980s), the collection of measurements was relatively small and inhomogeneous. At that time, it seemed possible -- within the uncertainties -- that all Type Ia SNe had the same absolute luminosity; in other words, it seemed possible that they might be standard candles.


    Abstract from Branch and Bettis, AJ 83, 224 (1978)

    However, as astronomers accumulated better measurements and larger samples, it became clear that SNe Ia are not all identical. These supernovae appear to vary in a systematic way.

    For example, if we measure the amount by which supernovae decline in brightness 15 days after maximum light in the B-band,


    Figure taken from Richmond et al., AJ 111, 327 (1996)

    and compare it to the absolute magnitude of the event, we find a clear correlation.

    If we can measure enough SNe Ia to pin down these relationships between absolute magnitude and other observable quantities, we can perhaps turn SNe Ia into standard-izable candles: not as nice as truly standard candles, but still useful. There are several groups working on this problem, with slightly different techniques, and they have found some success. The SALT procedure involves choosing one of a set of templates which best fits the light curve of some particular observed SN Ia.


    Figure taken from Guy et al., A&A 443, 781 (2005)

    Using these methods to correct for the relationship between decline rate and luminosity, one can reduce the uncertainty in distance modulus measurements for SNe Ia to perhaps 0.15 magnitudes.
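    As a toy illustration of the decline-rate idea (not the actual SALT machinery), here is a sketch with a hypothetical linear calibration between peak absolute magnitude and the 15-day decline; the coefficients are placeholders in the spirit of such relations, not published values.

```python
# Hypothetical linear calibration of peak M_B against the decline rate
# dm15 (magnitudes of fading in the 15 days after B maximum).
def peak_abs_mag_B(dm15):
    return -19.3 + 0.78 * (dm15 - 1.1)    # placeholder coefficients

def distance_mpc(m_peak, dm15):
    """Distance from the apparent peak magnitude, ignoring extinction
    and K-corrections."""
    mu = m_peak - peak_abs_mag_B(dm15)    # distance modulus m - M
    return 10.0**((mu - 25.0) / 5.0)      # mu = 5 log10(D/Mpc) + 25

# Example: a SN Ia peaking at B = 14.0 that fades 1.1 mag in 15 days
print(f"{distance_mpc(14.0, 1.1):.0f} Mpc")   # -> about 46 Mpc
```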

    If one looks at SNe in the near-infrared H-band, they may indeed be nearly identical: the Hubble diagram below uses measurements which have NOT been corrected for the decline-rate effect. To be fair, much less work has been done in the near-IR than in the optical.


    Figure taken from Wood-Vasey et al., ApJ 689, 377 (2008)

    One of the reasons astronomers spend so much time trying to understand Type Ia SNe is that they are really, really luminous: their absolute magnitudes are around -19 or -20! That means that they can be seen at VERY large distances, which means that they may be able to test different cosmological models.


    Figure taken from Amanullah et al., ApJ 716, 712 (2010)

    As an example, SN 2011fe in M101 was a nearly ideal case:

    • it suffered little extinction by interstellar material in its host galaxy or in the Milky Way
    • it showed the "typical" or "normal" spectral features
    • it was discovered very soon after the explosion and measured frequently in several optical passbands

    A portion of the B-band light curve measured by Richmond and Smith, JAVSO 40, 872 (2012) is shown in the figure below.


    Modified slightly from Figure 3 of Richmond and Smith, JAVSO 40, 872 (2012)

    1. parallax, with which we can reach .
    2. RR Lyr or TRGB or Cepheids, with which we can reach .
    3. Type Ia SNe

    Still, even with this caveat, type Ia supernovae provide a powerful tool, because we can see them (and measure their properties) SO FAR AWAY!

    For more information

    Copyright © Michael Richmond. This work is licensed under a Creative Commons License.


    By Martin Hardcastle

    Galaxy distances must be measured by a complicated series of inferences
    known as the distance ladder. We can measure the distances to the
    nearest stars by parallax, that is by the apparent motion of the star in
    the sky as a result of the Earth's motion round the Sun. This technique
    is limited by the angular resolution that can be obtained. The
    satellite Hipparcos will provide the best measurements, giving the
    parallax for around 100,000 stars. At present parallax can be used
    accurately to determine the distances of stars within a few tens of
    parsecs from the Sun. [ 1 parsec = 3.26 lt yrs.]

    Statistical methods applied to clusters of stars can be used to extend
    the technique further, as can `dynamical parallax' in which the
    distances of binary stars can be estimated from their orbital
    parameters and luminosities. In this way, or by other methods, the
    distance to the nearest `open clusters' of stars can be estimated;
    these can be used to determine a main sequence (unevolved
    Hertzsprung-Russell diagram) which can be fitted to other more distant
    open clusters, taking the distance ladder out to around 7 kpc.
    `Globular clusters', which are much more compact clusters of older
    stars, can also have their distances determined in this way if account
    is taken of their different chemical composition; fitting to the H-R
    diagram of these associations can allow distance estimates out to 100
    kpc. All of these techniques can be checked against one another and
    their consistency verified.

    The importance of this determination of distance within our own galaxy
    is that it allows us to calibrate the distance indicators that are used
    to estimate distances outside it. The most commonly used primary
    distance indicators are two types of periodic variable stars (Cepheids
    and RR Lyrae stars) and two types of exploding stars (novae and
    supernovae). Cepheids show a correlation between their period of
    variability and their mean luminosity (the colour of the star also plays
    a part) so that if the period and magnitude are known the distance can
    in principle be calculated. Cepheids can be observed with ground-based
    telescopes out to about 5 Mpc and with the Hubble Space Telescope to at
    least 15 Mpc. RR Lyrae stars are variables with a well-determined
    magnitude; they are too faint to be useful at large distances, but they
    allow an independent measurement of the distance to galaxies within 100
    kpc, such as the Magellanic Clouds, for comparison with Cepheids. Novae
    show a relationship between luminosity at maximum light and rate of
    magnitude decline, though not a very tight one; however, they are
    brighter than Cepheids, so this method may allow distance estimates for
    more distant objects. Finally, supernovae allow distance determination
    on large scales (since they are so bright), but the method requires some
    input from theory on how they should behave as they expand. The
    advantage of using supernovae is that the derived distances are
    independent of calibration from galactic measurements; the disadvantage
    is that the dependence of the supernova's behaviour on the type of star
    that formed it is not completely understood.

    The best primary distance indicators (generally Cepheids) can be used
    to calibrate mainly empirical secondary distance indicators; these
    include the properties of H II regions, planetary nebulae, and
    globular clusters in external galaxies and the Tully-Fisher relation
    between the width of the 21-cm line of neutral hydrogen and the
    absolute magnitude of a spiral galaxy. These can all be used in
    conjunction with type Ia supernovae to push the distance ladder out to
    the nearest large cluster of galaxies (Virgo, at around 15--20 Mpc)
    and beyond (the next major goal is the Coma cluster at around 5 times
    farther away). Other empirical estimators such as a galaxy
    size-luminosity relation or a constant luminosity for brightest
    cluster galaxies are of uncertain value.

    The goal in all of this is to get out beyond the motions of our local
    group of galaxies and determine distances for much more distant
    objects which can reasonably be assumed to be moving along with the
    expansion of the universe in the Big Bang cosmology. Since we know
    their velocities from their redshifts, this would allow us to
    determine Hubble's constant, currently the `holy grail' of
    observational cosmology; if this were known we would know the
    distances to _all_ distant galaxies directly from their recession
    velocity. Sadly, different methods of this determination, using
    different steps along the distance ladder, give different results;
    this leads to a commonly adopted range for H of between 50 and 100
    km/s/Mpc, with rival camps supporting different values. There are a
    number of ongoing attempts to reduce the complexity of the distance
    ladder and thus the uncertainty in H. One has been the recent (and
    continuing) use of the Hubble Space Telescope to measure Cepheid
    variables directly in the Virgo cluster, thereby eliminating several
    steps; this leads to a high (80--100) value of H, although with large
    uncertainty (which should hopefully be reduced as more results
    arrive). Other groups are working on eliminating the distance ladder,
    with its large uncertainty and empirical assumptions, altogether, and
    determining the distances to distant galaxies or clusters directly,
    for example using the Sunyaev-Zeldovich effect together with X-ray
    data on distant clusters or using the time delays in gravitational
    lenses. The early results tend to support lower values of H, around
    50.



    Distances to galaxies and AGNs are important, but direct means of measuring distances may be difficult and very time-consuming. Hence the mere possibility of something like the Hubble flow cz = H0 D would be a real boon, since we could then estimate distance (to within errors caused by peculiar motion) from a single straightforward measurement. The idea is then that for "large enough" D, the Hubble velocity will overwhelm any peculiar motions and we will see a smooth, purely radial flow.

    Finding the value of H0 has been an important part of galaxy research from its inception, with the recent additional possibility of mapping systematic departures from a smooth Hubble flow. The procedure usually follows a distance ladder, in which objects of well-known properties are used to calibrate larger/brighter kinds of objects which can in turn be used to calibrate other indicators that may be seen to greater distances, until finally we have indicators that are useful into the realm of allegedly pure cosmological motion. A distance indicator must satisfy a number of requirements to be useful in this chain.

    Much of the debate over the distance scale arises from the large distances that we need to cover to be sure we are beyond the range of peculiar velocities such as Virgocentric flow. Eventually, we find that only global galaxy properties and their correlations are usable. In the ladder of distance indicators, propagation of errors becomes dominant. See Rowan-Robinson, The Cosmological Distance Ladder (Cambridge 1987), for a full discussion. Modern methods are described in Galaxy Distances and Deviations from Universal Expansion, ed. B. Madore and R.B. Tully (NATO ASI 180). We will consider the methods in the traditional distance ladder in turn.

    Trigonometric parallax. This is useful out to a few hundred pc for individual stars if we have milliarcsecond precision, which Hipparcos delivered for tens of thousands of stars. This is the only (almost) completely foolproof technique for distances, since we know the size of the Earth's orbit well. Statistical methods can be applied to whole groups of stars, using (for example) the solar motion through the galactic disk to generate secular parallax. These still sample only a tiny region of the galaxy, and in particular do not reach to either very luminous stars or Cepheid variables (though Hipparcos delivered statistically useful parallaxes for some Cepheids).

    Cluster convergent points. For nearby clusters of appreciable angular extent (like the Hyades) perspective makes the proper motions of individual stars not parallel, but directed toward a point in the sky parallel to the cluster's mean motion relative to the Sun. This gives the angle between our line of sight and the cluster's motion, and thus what fraction of the cluster's space motion is seen as proper motion and what as radial velocity. Measuring the average radial velocity then allows a distance determination, as the distance for which the radial velocity and proper motion are consistent with the angle between line-of-sight and space motion. This lets us calibrate absolute magnitudes for all the cluster members - including upper main-sequence and red giant stars. The classic example is the Hyades cluster, seen here using Hipparcos proper motions from Perryman et al. (1998 A&A 331, 81):
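    The geometry above packs into one line: if θ is the angle between the line of sight and the cluster's space motion, the transverse velocity is v_t = v_r tan θ, and matching v_t to the proper motion gives the distance. A minimal sketch (the numbers are illustrative, roughly Hyades-like, not a real solution):

```python
import math

# Moving-cluster (convergent-point) distance:
#   v_t = v_r * tan(theta),  d [pc] = v_t [km/s] / (4.74 * mu ["/yr])
# where 4.74 km/s is 1 AU/yr, the conversion between proper motion
# and transverse velocity.
def convergent_point_distance_pc(v_radial_kms, theta_deg, mu_arcsec_per_yr):
    v_t = v_radial_kms * math.tan(math.radians(theta_deg))
    return v_t / (4.74 * mu_arcsec_per_yr)

# Illustrative, Hyades-like numbers (not a real solution):
print(f"{convergent_point_distance_pc(39.0, 30.0, 0.11):.0f} pc")  # ~ 43 pc
```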

    Main-sequence fitting. For even more distant star clusters (that might contain OB stars or Cepheids, for example) we estimate distances by assuming that main-sequence stars of identical spectral type have the same absolute magnitude. This amounts to, for example, shifting the main-sequence location of a cluster until it coincides with that of some reference cluster like the Hyades. The reddening must be reasonably well determined to make this work. This can be done for systems as distant as the Magellanic clouds, which is the easiest place to calibrate Cepheids. For this purpose, each Magellanic Cloud can be thought of as a giant cluster.
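    In practice the fit reduces to a magnitude offset: stars of the same spectral type (hence the same assumed absolute magnitude) differ in apparent magnitude only through the inverse-square law and extinction. A minimal sketch, with made-up numbers:

```python
# Main-sequence fitting, minimal sketch: the vertical magnitude shift needed
# to overlay the target cluster's main sequence on a reference cluster's
# gives the distance ratio, once extinction is accounted for.
def distance_from_ms_offset_pc(d_reference_pc, delta_mag, extinction_mag=0.0):
    """delta_mag = (target apparent mag) - (reference apparent mag)
    at fixed color; extinction_mag is the target's extra dimming by dust."""
    return d_reference_pc * 10.0**((delta_mag - extinction_mag) / 5.0)

# Example: a cluster whose main sequence sits 5.0 mag fainter than the
# Hyades sequence (Hyades at ~46 pc), with 0.3 mag of assumed extinction:
print(f"{distance_from_ms_offset_pc(46.0, 5.0, 0.3):.0f} pc")  # ~ 400 pc
```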

    Cepheid variables. These are supergiants in the instability strip on the H-R diagram, undergoing regular pulsations that are expressed by luminosity and temperature variations. Their high optical luminosity makes them easy to pick out (though, being rather massive stars, they don't occur in elliptical galaxies). Recent data give a period-luminosity relation of the form <MV> = -3.53 log P + 2.13 (<B0> - <V0>) + φ, where φ ≈ -2.25 is a zero point. P is in days here, and the brackets denote averaging over a cycle of the light curve. The relations for the SMC and LMC are shown by Mathewson, Ford and Visvanathan 1986 (ApJ 301, 664) as follows, from their Fig. 3 (courtesy of the AAS):
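    A sketch of putting the quoted period-luminosity-color relation to work (the example star's numbers are invented):

```python
import math

# Distance from the P-L-C relation quoted above:
#   <M_V> = -3.53 log10(P) + 2.13 (<B0> - <V0>) + phi,  phi ~ -2.25
def cepheid_distance_pc(period_days, mean_V, mean_B0_minus_V0, phi=-2.25):
    M_V = -3.53 * math.log10(period_days) + 2.13 * mean_B0_minus_V0 + phi
    mu = mean_V - M_V                  # distance modulus (extinction ignored)
    return 10.0**((mu + 5.0) / 5.0)    # mu = 5 log10(d/pc) - 5

# Example: a 10-day Cepheid with <V> = 14.0 and <B0> - <V0> = 0.7
print(f"{cepheid_distance_pc(10.0, 14.0, 0.7):.0f} pc")  # ~ 45,000 pc
```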

    To use Cepheids effectively, one must deal with several practical points.

    Cepheids have been measured from the ground throughout the Local Group (which Hubble could do - the astronomer, not the telescope), and can be detected in the M81 and Sculptor groups, and more recently in M101 at a distance of 7 Mpc (Cook, Aaronson, and Illingworth 1986 ApJLett 301, L45), and even an amazing detection of a couple in the late-type Virgo spiral NGC 4571, when the seeing and stellar crowding all worked together (Pierce et al. 1994 BAAS 26, 1411). Note that it is traditional to quote the distance modulus m-M = 5 log D - 5 rather than the distance itself in many publications on the distance scale - for example, the DM of the LMC is close to 18.5. To date, the HST key project on the distance scale has reported detections of Cepheids to 25 Mpc, and it can in principle go well beyond Virgo. A real shame there aren't any spirals which can be shown to live in the Coma core. The best-known report of this work was for NGC 4321=M100 in Virgo by Ferrarese et al. (1996 ApJ 464, 568); see also Freedman et al. 1994 (Nature 371, 757). The project, using Cepheids to calibrate secondary distance indicators through common galaxy and group membership, was described by Kennicutt, Mould, and Freedman 1995 (AJ 110, 1476). Some of their Cepheid light curves are shown below -- for M100 alone, they already detect more Cepheids than are known in the LMC, so the LMC calibration becomes a weak link. The project has gotten all its data, and a recent summary (Mould et al. 2000 ApJ 529, 786) gives a grand average value of H0 = 71 ± 6 km/s Mpc as based on HST Cepheid distances to 25 galaxies, in ridiculously close agreement with results of fitting the WMAP power spectrum of CMB fluctuations.

    This plot collects the Key Project Cepheid distances. Note the large peculiar motions within Virgo; the one galaxy lying right on the mean line at that distance is NGC 7331, almost opposite Virgo in the sky.

    RR Lyrae stars. These are lower-luminosity stars, where the instability strip crosses the horizontal branch. They may appear on cluster H-R diagrams by omission in the "RR Lyrae gap", since variables are usually not plotted. The absolute magnitude of all RR Lyrae variables seems to be nearly constant at <MV> = 0.75 ± 0.1. There may be some poorly-determined metallicity dependence. No period determination is needed here, just the determination that a star is of this type (which means you get the period anyway). Problems are: RR Lyraes are intrinsically about 2 magnitudes fainter than Cepheids, and similarly difficult to calibrate; only a couple are close enough for a parallax measurement with Hipparcos, so statistical parallaxes are still important.

    Automated image detection has proven fruitful in finding these stars throughout the Local Group, even before HST. Saha and Hoessel (1990, AJ 99, 97) report finding 151 in the small elliptical NGC 185, as seen in their Fig. 5, courtesy of the AAS:

    Most luminous (blue/red) stars. There is an empirical relation between a galaxy's absolute magnitude and that of the brightest individual stars - this amounts to assuming a constant form for the upper end of the luminosity function and letting statistics operate. Conveniently, these are the first stars to be resolved. Possible problems: confusion with compact clusters (as in 30 Doradus), unknown variation with galaxy type.

    All of the stellar indicators listed above for other galaxies are easiest to use in systems with substantial population I components, and in rather open galaxies so that crowding is reduced. One therefore tries to deal with a galaxy's outer regions, and rather late-type galaxies (see the Sandage and Bedke atlas for illustrations of resolution into stars for such galaxies, which was the point of their producing this volume). There are also several temporary or indirect stellar distance indicators:

    Novae. There is a relation between absolute magnitude and fading rate for novae, as best we can tell from the Local Group. They can easily be picked out as transient Hα sources, and two seem to have been detected in this way as far away as M87 (Pritchet and van den Bergh 1987 ApJLett 288, L41); as well, data series sufficient to find Cepheids may find them as continuum sources. Ciardullo et al. (1990 ApJ 356, 472) discuss 11 well-observed novae in M31. The relation between fading rate and absolute B magnitude is only partially followed by Hα, so that a combination of Hα discovery, continuum observations near maximum, and Hα observations to faint levels seems the most effective approach. Faint continuum measurements are impossible because the nova blends into the overall stellar background. This technique may be used for population II systems.

    Planetary nebulae. These can also trace the population II components, since they can be produced by old stars. Their usefulness as a distance indicator relies on the fact that their luminosity function appears to be invariant, and is easily understood from stellar evolution (Jacoby 1989 ApJ 339, 39). Large numbers of planetaries can be detected in nearby galaxies by using narrow-band images around the [O III] λ5007 line, which is extremely strong in planetaries but not most H II regions. Sufficient planetaries have been detected for estimates of the distance to Virgo (Jacoby et al. 1990, ApJ 356, 332). The fitting technique for an incomplete luminosity function is illustrated by Fig. 3 of Ciardullo et al. 1989 (ApJ 339, 53) for M31 (courtesy the AAS):

    Supernovae. Type I (population II) supernovae can be recognized (and divided into subgroups a, b, and maybe c) based on their spectra and light curves. Available evidence is consistent with peak luminosity being roughly fixed for at least type Ia (but watch out, new understanding of subluminous ones like 1987A may change this). Supernovae can be seen a long way off (like z=1.7 if you're looking hard), so they would make wonderful distance indicators if (1) we really know this peak luminosity, (2) it really is constant, and (3) we can account for dust obscuration (hello IR). The peak brightness is given by supernova models, but SN in galaxies nearby enough for checking are rare. For cosmologically distant SN the rate of decay is stretched by the dilation factor (1+z). These are the objects which first provided strong evidence for an acceleration of the Hubble expansion (perhaps to be identified with Einstein's cosmological constant).

    A direct measure of distance for expanding or pulsating objects is in principle possible via the Baade-Wesselink method. One measures the change in bolometric luminosity and the integral of the radial velocity (i.e., the change in radius) over this time. Then, applying either a blackbody approximation or a more realistic spectrum, the angular size difference between the two epochs is derived, which gives a distance by requiring it to be consistent with the radius change from radial velocities. Problems center around just how the observed velocity is weighted across the photosphere and whether the opacity structure changes between epochs.
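    A sketch of the blackbody version of the argument (all numbers invented): the flux and temperature at each epoch give the angular radius via f = σ T^4 (R/d)^2, and the radius change from the velocity integral then sets the distance.

```python
import math

SIGMA = 5.67e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4

# Baade-Wesselink sketch, blackbody approximation:
#   f = sigma * T^4 * (R/d)^2   =>   theta = R/d = sqrt(f / (sigma * T^4))
# The radius change between epochs comes from integrating the radial
# velocity; dividing it by the change in angular radius gives the distance.
def baade_wesselink_distance_cm(f1, T1, f2, T2, delta_R_cm):
    theta1 = math.sqrt(f1 / (SIGMA * T1**4))
    theta2 = math.sqrt(f2 / (SIGMA * T2**4))
    return delta_R_cm / (theta2 - theta1)

# Invented epochs: fluxes in erg/cm^2/s, temperatures in K, dR in cm
d_cm = baade_wesselink_distance_cm(7.35e-6, 6000.0, 8.77e-6, 5500.0, 3.0e13)
print(f"{d_cm / 3.086e18:.0f} pc")   # ~ 3200 pc
```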

    Surface-brightness fluctuations. Well before a galaxy is truly resolved into even its brightest stars, the image will be mottled by statistical fluctuations; for example, if the surface brightness is such that there are 100 red giants per seeing disk, one expects 10% Poisson fluctuations. These may be distinguished from photon noise because these fluctuations have the same spatial power spectrum as the seeing disk (or more generally the system response, i.e. PSF), not white noise (Tonry and Schneider 1988 AJ 96, 807). As a sample, this image shows M32 HST data resampled as if seen at progressively greater distances (each step increasing by a factor 2). The technique is surprisingly powerful as long as one can compare galaxies with similar stellar populations - basically one must assume a characteristic (well-defined) mean luminosity for stars. The method has already been extended to Virgo, giving excellent agreement with planetary-nebula determinations and first hints as to which galaxies are on the near and far sides (Tonry et al. 1989 ApJ 346, L57).
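    The statistical heart of the method fits in a few lines: for a fixed angular patch the number of stars grows as d^2, so the fractional Poisson mottling falls as 1/d. A toy scaling (the 100-giants-per-seeing-disk normalization is hypothetical):

```python
import math

# Surface-brightness fluctuations, toy scaling: with N giants per seeing
# disk, the fractional Poisson fluctuation is 1/sqrt(N); N grows as d^2
# for a fixed angular patch, so the mottling amplitude falls as 1/d.
for d_mpc in (1, 2, 4, 8):
    n_stars = 100 * d_mpc**2   # hypothetical: 100 giants/seeing disk at 1 Mpc
    print(f"{d_mpc} Mpc -> fractional fluctuation {1 / math.sqrt(n_stars):.3f}")
```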

    H II regions. By necessity these require active star formation and OB stars. They are luminous and measurable to very large distances. The first approach (Sandage and Tammann 1974 ApJ 190, 525) was to assume that the diameter of the brightest H II regions is related to galaxy absolute magnitude. However, Kennicutt 1979 (ApJ 228, 704) showed that seeing effects compromise visual and isophotal diameters so strongly that this cannot work as a distance indicator. More recent work has focussed on emission-line luminosities, assuming in essence that the more star formation, the brighter the galaxy, and statistically the brighter the biggest few H II regions are. This might be considered a variant on the brightest blue stars method.

    The emission-line widths have also been considered, with a claim by Terlevich and Melnick (1981 MNRAS 195, 839) that an L ∝ σ^4 relation holds for supergiant H II regions; that is, that they are bound by a gravitational mass proportional to (ionizing-UV) starlight intensity. This would be useful in the same way as the Tully-Fisher relation or the analogous relation for elliptical galaxies. However, further work (Gallagher and Hunter 1983 ApJ 274, 141; Roy et al. 1986 ApJ 300, 624) has clouded the picture: for more extended samples, the correlation is much less striking, and the gas motions are largely supersonic, driven by stellar winds and SN rather than being gravitationally produced.

    A refinement, including a second parameter related to surface brightness, has been used by the Seven Samurai to compile a large set of redshift-independent distances for mapping the local velocity field (Dressler et al. 1987 ApJ 313, 42; data in Faber et al. 1989 ApJSuppl 69, 763).

    Global galaxy properties: These must be used for more and more distant systems, requiring extensive calibration from the techniques above. Specific indicators include:

    Corrections to observed magnitudes must be applied for (1) measuring aperture size, (2) passband redshifting, the so-called K-correction, (3) the redshifting of both photon energy and arrival rate, and (4) any assumed evolution - at least passive evolution of the stellar population must be taking place.

    "Exotic" Distance Indicators

    All of the above methods rely on a straightforward application of the inverse-square law or the angular diameter-distance relation. There is also a range of techniques that use more involved or indirect combinations of observables. Some examples are:

    The Hubble time: for simple big-bang models, ages of objects (stars, radioactive nuclei) set bounds on H0. The age of the universe is of order the Hubble time τH = 1/H0, to within a factor of order unity depending on the deceleration history of the expansion. For H0 = 50 km/s Mpc, τH = 2 x 10^10 years; for 100 km/s Mpc, 10^10 years. This must be greater than the age determined from geological and stellar-evolutionary timescales, nuclear isotopic clocks like 235U/238U, and consistent with the dynamical status of galaxies and clusters. The small amount of evolution observed in elliptical galaxies to about z=1 favors smaller H0 in simple models (Hamilton 1985 ApJ 297, 371). One should beware subtly circular arguments - globular-cluster ages were beautifully consistent with H0=50 but had been calculated by people who knew the answer they expected to get and tuned a few parameters accordingly. There was, for several years, a widely-publicized discrepancy between τH from HST Cepheid results and globular-cluster ages, but recent calculations of effects of mixing on stellar evolution and the Hipparcos distance revisions to Cepheids both go in the direction of reducing the problem.
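    The unit conversion behind those numbers, as a quick check:

```python
# Hubble time tau_H = 1/H0, converting km/s/Mpc into years.
MPC_KM = 3.086e19        # kilometers per megaparsec
SEC_PER_YEAR = 3.156e7
for H0 in (50.0, 71.0, 100.0):
    tau_years = MPC_KM / H0 / SEC_PER_YEAR
    print(f"H0 = {H0:5.1f} km/s/Mpc -> tau_H = {tau_years:.1e} yr")
```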

    Gravitational lenses: we need to know the lens mass (for example through the cluster velocity dispersion) and the time delay between images (say from QSO variability). Then we can derive the lens proper distance. The differential time delay may be the hardest part here, especially in the presence of microlensing.

    Light echoes: this has given an independent distance to the LMC, by using the time of illumination of the circumstellar ring around SN 1987A (seen with IUE, Panagia et al. 1991 ApJL 380, L23) to give an absolute front-back size, and the angular size of the ring (from HST) for a transverse measurement. This analysis was carried out by, for example, Gould (1995 ApJ 452, 189). A similar approach can also be used (with polarization to tell where the ring is) for distant supernovae (Sparks 1994 ApJ 433, 19).

    Emission/absorption measures: here one uses the different dependences of emission and absorption on density versus path length. An example is the IGM in clusters seen in emission via X-rays and in absorption (more precisely upward scattering) against the microwave background (the Sunyaev-Zeldovich effect). This works because on astrophysical grounds we expect the hot gas to be smoothly distributed through the cluster potential; clumping would make this more useful for probing structure than distance. So far, this isn't accurate enough for use as more than a consistency argument because the absorption is very weak, but in principle it is free of many of the assumptions of other methods (10^7 K gas should be very smoothly distributed). This technique for detecting hot cluster atmospheres is almost equally sensitive for all cluster redshifts z>0.5 because it's an area measure, so surveys are in progress to find high-redshift clusters as S-Z spots.

    Proper motions: a maser in a star-forming region should be detectable with the VLBA all the way to Virgo. Its proper motion due to the rotation of a typical spiral should be of order 3 microarcseconds per year, which, it has been claimed, should be measurable in a decade or so. One then determines the distance at which this matches the disk rotational velocity at the appropriate radius. The most distant actual application so far has been to masers in the nuclear disk of NGC 4258 (Herrnstein et al. 1999 Nature 400, 539).

    Distances to nearby galaxies are not in serious dispute, but the role of peculiar velocity on these scales is. Some useful distances are (in Mpc)

    This means that H(Virgo) is about 60 km/s Mpc, but is this value globally applicable? Two major camps long existed: Sandage at 50 (the "long" distance scale) and de Vaucouleurs at 100 (the "short" scale). Data occasionally drown in invective on this issue. Doing a systematic error treatment, Hanes 1981 (MNRAS) and Rowan-Robinson in his book found that 80 km/s Mpc satisfies all the error bars and is what the IR T-F relation gives at large distances. This is essentially the Key Project global value as well, with the CMBR global fitting giving a value of 71. Maybe the compromise value of 75 that many people have used was actually more than fence-sitting.

    Aaronson, Huchra, and Mould found evidence for systematic departures from the Hubble flow toward Virgo, so that the redshift-distance relation is nonlinear, and in some places double or triple-valued.

    A first indication of such disturbances was the study by Rubin and Ford (1976 AJ 81, 719) of 96 Sc I galaxies, which showed an asymmetry on the sky in redshift-magnitude space such that we were likely to be moving at about 500 km/s with respect to the centroid of these galaxies. This eventually turned into an industry, with the 7 Samurai announcing a "Great Attractor" off in Centaurus (l=299°, b=-11°) that messes up the velocity field out to about 3000 km/s (Lynden-Bell et al. 1987 ApJLett 313, L37). We are approaching this mass at about 700 km/s; this is actually consistent with the Rubin and Ford result if Virgo infall is included. Lauer and Postman (1994 ApJ 425, 418) find yet a different motion relative to 119 Abell clusters at z < 0.05 - 561 ± 284 km/s toward l=220°, b=-28° - a different direction and certainly an unexpected magnitude. A somewhat different motion is derived with respect to the microwave background, which is the grandest average we can find - the final COBE data set gives 368 km/s toward l=264.3, b=48.1, with independent analysis of the FIRAS and DMR instruments in good agreement (Lineweaver et al. 1996 ApJ 470, 38). This has just been refined with WMAP to l=263.8, b=48.2 (Bennett et al. ApJ submitted, astro-ph/0302207). At some point one wonders about the scale on which the cosmological principle is adequately realized. This means that the Grail itself, H0, must be sought at even larger distances than thought before (to the extent that it would be useful in itself if the Hubble flow is really lumpy, though the tightness of the Hubble diagram for standard candles suggests that it isn't all that bad).

    There are also isolated instances of galaxies flagrantly violating the Hubble flow. Perhaps the best is in the direction of NGC 1275. The main galaxy has v=5000 km/s, and has something that looks like a late-type spiral demonstrably in front of it but having v=8100 km/s. Images from Keel 1983 (AJ 88, 1579) isolate the foreground and background systems in Hα:

    while the foreground system is visible in absorption by dust in this HST image:

    This is too fast to be just free fall into a cluster core - and if there are many galaxies shooting around at 3000 km/s there should be huge scatter in the Hubble diagram. Thus there can't be many of these, but how far would we have gone wrong if we saw the spiral by itself?


    2. Hubble Space Telescope Astrometry of Polaris B

    2.1. FGS Observations and Data Analysis

    As part of an astrometric program on the trigonometric parallaxes of overtone Cepheids, we observed Polaris with the FGS system on HST. The FGSs are a set of three interferometers that, in addition to providing guiding control during imaging or spectroscopic observations, can measure precise positions of a target star and several surrounding astrometric reference stars with one FGS while the other two guide the telescope. The FGS system has been shown capable of yielding trigonometric parallaxes, in favorable cases, with better than ±0.2 mas precision (e.g., Benedict et al. 2007, hereafter B07; Soderblom et al. 2005; Benedict et al. 2011, 2017; McArthur et al. 2011; Bond et al. 2013).

    The Cepheid Polaris A, at a mean brightness of about V = 2 (Fernie et al. 1995), is too bright to be observed with the FGS system. Because of the strong evidence that Polaris B is a physical companion at the same distance as the Cepheid (see above), we chose it instead as our astrometric target. We made FGS observations of Polaris B during two HST visits at each of five epochs between 2003 October and 2006 September (program numbers GO-9888, -10113, and -10482; PI H.E.B.), at dates close to the biannual times of maximum parallax factor. We used FGS1r for the measurements, in its wide-angle astrometric POSITION mode. There was no sign of duplicity of B in the FGS acquisition data. In addition to Polaris B, we observed a network of 10 faint background reference stars lying close to the target. Of the 10 reference stars, two were rejected because of acquisition failures, faintness, binarity, or interference from the diffraction spikes of Polaris A, and we retained eight (with magnitudes of V = 14.1-16.5) for the final solution. They are listed in Table 1.

    Table 1. Astrometric Reference Stars and Polaris B

    ID | R.A. (J2000) / Decl. (J2000) | V | B-V | V-I | Sp.Type | μ (mas yr^-1)^a | π (mas)^b
    (each star occupies two rows: the first gives R.A., μ in R.A., and the input parallax; the second gives Decl., μ in decl., and the adjusted parallax)
    R1 02:37:32.4 14.342 0.762 0.890 F8 V 0.9 ± 0.4 1.16 ± 0.15
    +89:20:00.1 ±0.003 ±0.007 ±0.003 −0.6 ± 0.4 1.14 ± 0.07
    R2 02:25:31.0 14.277 0.814 0.930 G2 V −7.9 ± 0.8 1.31 ± 0.17
    +89:18:09.5 ±0.003 ±0.004 ±0.004 7.0 ± 0.5 1.41 ± 0.13
    R3 02:34:04.9 16.504 0.734 0.820 F7: IV: 0.5 ± 0.8 0.28 ± 0.11
    +89:19:11.6 ±0.014 ±0.010 ±0.011 −0.7 ± 0.7 0.28 ± 0.04
    R7 c 02:30:48.2 14.147 0.825 G0 IV 5.4 ± 0.5 1.04 ± 0.35
    +89:14:30.2 ±0.003 ±0.007 0.5 ± 0.4 1.04 ± 0.13
    R8 02:25:26.6 15.304 1.116 1.237 G0 IV 9.7 ± 0.6 0.49 ± 0.16
    +89:14:26.2 ±0.015 ±0.011 ±0.009 −6.8 ± 0.5 0.49 ± 0.05
    R9 02:21:18.2 14.958 0.903 1.070 G1 IV 13.3 ± 1.0 0.76 ± 0.30
    +89:13:37.5 ±0.007 ±0.007 ±0.005 1.5 ± 0.7 0.73 ± 0.07
    R10 02:32:25.8 14.675 1.360 1.633 K5 V 35.0 ± 0.6 5.48 ± 0.70
    +89:12:09.2 ±0.004 ±0.008 ±0.007 15.6 ± 0.6 6.32 ± 0.42
    R13 02:25:58.3 15.940 1.051 1.140 G5: V: 3.5 ± 0.8 1.16 ± 0.15
    +89:12:12.9 ±0.006 ±0.020 ±0.010 −2.0 ± 0.7 1.12 ± 0.17
    B d 02:30:43.5 8.65 0.42 F3 V 41.1 ± 0.4
    +89:15:38.6 ±0.02 −13.8 ± 0.4 6.26 ± 0.24

    a Proper motions in R.A. and decl. from our astrometric solution. b Input estimated absolute parallax (top entry), and adjusted absolute parallax from astrometric solution (bottom entry). c R7 is cataloged as Polaris D, which was identified as a possible companion of Polaris by Burnham (1894), and discussed more recently by Evans et al. (2002, 2010). The latter did not detect X-ray emission from Polaris D, suggesting that it is not a young low-mass companion of the Cepheid. Our spectral type and photometry, giving an estimated distance of ~960 pc, and our measured proper motion, definitively rule out Polaris D as a physical companion of Polaris A and B. d Polaris B. V magnitude from Evans et al. (2008) and B-V from the literature compilation by Turner (2005); spectral type from Turner (1977).

    Our FGS astrometric solution procedure is outlined by Bond et al. (2013), and described in detail by B07 and Nelan (2017). The first step is to correct the positional measurements from the FGS for differential velocity aberration, geometric distortion, thermally induced spacecraft drift, and telescope pointing jitter. Because of refractive elements in the FGS optical train, an additional adjustment based on the B-V color of each star is applied. Moreover, as a safety precaution due to its proximity to Polaris A, Polaris B itself was observed with the F5ND neutral-density attenuator, while the much fainter reference stars were observed only with the F583W filter element. Thus it was necessary to apply "cross-filter" corrections to the positions of Polaris B relative to the reference stars; the corrections are slightly dependent on the location of the star in the FGS field.

    The adjusted measurements from all 10 visits were then combined using a six-parameter overlapping-plate technique that solves simultaneously for scale, translation, rotation, and proper motion and parallax of each star. Full details, including the equations of condition, are given in B07, their Section 4.1. We employed the least-squares program GAUSSFIT (Jefferys et al. 1988) for this analysis. Parallax factors are obtained from the JPL Earth orbit predictor, version DE405 (Standish 1990). Since the FGS measurements provide only the relative positions of the stars, the model requires input estimated values of the reference-star proper motions and parallaxes, in order to determine an absolute parallax of the target. These estimates (Section 2.2) were input to the model as observations with errors, which permits the model to adjust their parallaxes and proper motions (to within their specified errors) to find a global solution that minimizes the resulting χ².
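    The core of such a model, stripped of the FGS-specific corrections (a generic sketch with synthetic, noiseless data, not the GAUSSFIT setup of the paper): each one-dimensional position measurement is x(t) = x0 + μt + πP(t), where P(t) is the parallax factor, and several epochs make this a linear least-squares problem.

```python
import numpy as np

# Synthetic 1-D positions (mas) at six epochs, generated from
# x0 = 0, mu = 43 mas/yr, parallax = 6.3 mas -- invented numbers.
t  = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])          # years
Pf = np.array([0.9, -0.9, 0.9, -0.9, 0.9, -0.9])       # parallax factors
x  = np.array([5.67, 15.83, 48.67, 58.83, 91.67, 101.83])

# Design matrix for x(t) = x0 + mu*t + plx*Pf(t); solve by least squares.
A = np.column_stack([np.ones_like(t), t, Pf])
x0, mu, plx = np.linalg.lstsq(A, x, rcond=None)[0]
print(f"mu = {mu:.1f} mas/yr, parallax = {plx:.1f} mas")
```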

    2.2. Reference-star Proper Motions and Parallaxes

    The initial proper-motion estimates for the reference stars were taken from the UCAC5 catalog (Zacharias et al. 2017). In order to estimate the distances to the reference stars, we employed spectral classification and photometry, and as a lower-weight criterion, their reduced proper motions. For spectral classification, we obtained digital spectra with the WIYN 3.5 m telescope and the Hydra multi-object spectrograph at the Kitt Peak National Observatory (KPNO), on the night of 2003 November 22. The classifications were accomplished through comparison with a network of MK standard stars obtained with the same spectrograph, assisted by equivalent-width measurements of lines sensitive to temperature and luminosity. The results are given in the sixth column in Table 1.

    Photometry of the reference stars in the Johnson-Kron-Cousins BVI system was obtained at KPNO on one photometric night in 2007 October (0.9 m telescope), and on three photometric nights in 2008 October (2.1 m telescope). Each star was measured on between 9 and 13 individual CCD frames. The photometry was calibrated to the standard-star network of Landolt (1992), and the results are presented in Table 1. The internal errors of the photometry, tabulated in Table 1, are generally quite small, but the systematic errors are probably larger because of (a) the high airmass at which the Polaris field has to be observed, and (b) the presence of a very bright star at the center of the field, giving rise to PSF wings, diffraction spikes, and charge-bleeding columns across much of the field.

    Although Polaris itself is unreddened (e.g., Fernie 1990; Laney & Caldwell 2007), or very lightly reddened (e.g., Gauthier & Fernie 1978; TKUG13), it is known to lie just in front of a molecular cloud, the "Polaris Cirrus Cloud" or "Polaris Flare" (e.g., Sandage 1976; Heithausen & Thaddeus 1990; Zagury et al. 1999; Cambrésy et al. 2001; Ward-Thompson et al. 2010; Panopoulou et al. 2016, and references therein). Thus significant reddening of the reference stars is expected.

    To estimate their reddening, we compared the observed B-V color of each star with the intrinsic color corresponding to its spectral type (Schmidt-Kaler 1982), from which we calculated an average color excess. We also used the extinction map of Schlafly & Finkbeiner (2011), as implemented at the NASA/IPAC website, to determine the reddening in the direction beyond Polaris. The Schlafly & Finkbeiner map gives a range of reddening values across the field covered by the reference stars of up to 0.30, which is the total reddening for a hypothetical star at a very large distance. We adopted a single reddening for all of the reference stars, except for R10, the nearest one, for which we used a value based on its spectral type and observed B-V.

    The distances to the reference stars were then estimated as follows: (1) For the four stars classified as dwarfs, we used a calibration of the visual absolute magnitude, MV, against B-V and V-I colors derived through polynomial fits to a large sample of nearby main-sequence stars with accurate photometry and Hipparcos or USNO parallaxes, which is described in more detail in Bond et al. (2013). This algorithm corrects for effects of metallicity. (2) For the four subgiants, we searched the Hipparcos data for all stars classified with the same spectral types that had parallaxes greater than 15 mas, and calculated their mean absolute magnitude for use in the distance estimate. For the dwarfs, our MV versus BVI calibration reproduces the known absolute magnitudes of the sample of nearby dwarfs with an rms scatter of 0.28 mag. The scatter in the subgiant MV calibrators was larger, about 0.8 mag. Our final estimated input parallaxes and their errors, based on the scatter in the MV calibrators, are given in the last column of Table 1, along with the output parallaxes given by the solution.
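    The "photometric parallax" step here amounts to one line of arithmetic. A sketch with invented but representative numbers (a hypothetical F8 V star with an assumed M_V and extinction A_V ≈ 3.1 E(B-V)):

```python
# Estimated parallax of a reference star from its apparent magnitude,
# calibrated absolute magnitude, and extinction:
#   d [pc] = 10^((V - A_V - M_V + 5) / 5),  parallax [mas] = 1000 / d
def estimated_parallax_mas(V, M_V, A_V=0.0):
    d_pc = 10.0**((V - A_V - M_V + 5.0) / 5.0)
    return 1000.0 / d_pc

# Hypothetical F8 V star: V = 14.3, M_V = 4.0, A_V = 0.8
print(f"{estimated_parallax_mas(14.3, 4.0, 0.8):.2f} mas")  # ~ 1.3 mas
```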

    2.3. Parallax and Proper Motion of Polaris B

    Our solution results in an absolute parallax of Polaris B of 6.26 ± 0.24 mas (corresponding to a distance of 159.7 ± 6.1 pc), as indicated at the bottom of Table 1. The uncertainty includes contributions from residual errors in the geometric-distortion calibration of the FGS, errors in HST pointing performance, and errors in the raw stellar position measurements. The proper-motion components for Polaris B from the FGS solution are . The absolute proper motion of Polaris A determined by Hipparcos is (van Leeuwen 2007), but this includes an offset due to orbital motion in the close A–Ab pair during the relatively short astrometric mission. The long-term proper motion of A in the FK5 system, corrected to the Hipparcos frame, is , according to Wielen et al. (2000). Since the uncertainties of the individual UCAC5 proper motions used to establish the FGS reference frame are about (Zacharias et al. 2017), the agreement with the FGS results is reasonable.

    2.4. The Discrepancy with Hipparcos

    Our result for the parallax of Polaris B (6.26 ± 0.24 mas) is 1.28 mas smaller than found by Hipparcos for Polaris A (7.54 ± 0.11 mas). Is it plausible that the Hipparcos result could be in error by such a large amount?
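    In distance terms the disagreement is substantial; a quick check using d(pc) = 1000/π(mas):

        # Converting the two parallaxes to distances, d = 1000 / parallax[mas]:
        for label, plx in [("FGS, Polaris B", 6.26), ("Hipparcos, Polaris A", 7.54)]:
            print(f"{label}: {1000.0 / plx:.1f} pc")
        # FGS:       ~159.7 pc
        # Hipparcos: ~132.6 pc  -> a ~27 pc (~20%) difference in distance,
        # which propagates directly into the Cepheid luminosity calibration.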

    Hipparcos parallaxes have usually agreed with the results of HST/FGS measurements, or of other parallax techniques, to within their respective errors (e.g., Benedict et al. 2002; McArthur et al. 2011; Bond et al. 2013). However, there have been a few notable exceptions: (1) For the Pleiades cluster, Melis et al. (2014) obtained a precise cluster parallax of 7.35 ± 0.07 mas from very-long-baseline radio interferometry (VLBI) astrometry of four radio-emitting cluster members. FGS parallaxes of three other Pleiades stars gave an average absolute parallax of 7.43 ± 0.17 (random) ± 0.20 (systematic) mas (Soderblom et al. 2005), in accord with the VLBI result. However, van Leeuwen (2009), based on Hipparcos astrometry of over 50 Pleiads, found a mean cluster parallax of 8.32 ± 0.13 mas, larger by 0.97 mas than the VLBI result. (2) Benedict et al. (2011) used FGS to measure a parallax of the Type II Cepheid κ Pavonis of 5.57 ± 0.28 mas; the Hipparcos parallax of 6.52 ± 0.77 mas is larger by a similar 0.95 mas (although this is of lower statistical significance because of the relatively large Hipparcos uncertainty). (3) VandenBerg et al. (2014) used FGS to measure parallaxes of three halo subgiants. For two of them, the results agreed very well with Hipparcos, but for HD 84937, the Hipparcos value of 13.74 ± 0.78 mas was larger by 1.50 mas than the FGS measurement of 12.24 ± 0.20 mas. (4) Zhang et al. (2017) used VLBI astrometry to derive a parallax of 4.42 ± 0.13 mas for the semi-regular variable RT Virginis, for which the Hipparcos parallax is 7.38 ± 0.84 mas, or 2.96 mas larger.

    In summary, there are indeed isolated examples of Hipparcos parallax measurements that have been shown to be anomalously large.

    2.5. Possible Sources of Systematic Error in the FGS Parallax

    In this subsection, we comment on possible causes of a systematic error in our FGS parallax measurement for Polaris B, which could potentially explain the discordance with the Hipparcos value for the Cepheid Polaris A.

    (1) Could our input estimated parallaxes of the reference stars be systematically too low by ~1.3 mas? Omitting the star R10, which is unusually nearby, we find a mean estimated parallax of the other seven reference stars of 0.89 mas. This agrees quite well with the value of 1.0 mas for the mean parallax of field stars at V = 15, at the Galactic latitude of Polaris, recommended by van Altena et al. (1995, their Figure 2) based on a statistical model of Galactic structure. Increasing our reference-star parallaxes by a mean of about 1.3 mas would give serious disagreement with the van Altena et al. model values. Moreover, it would require the reference stars to be systematically about 1.9 mag fainter in absolute magnitude than in our calibration, which appears astrophysically unlikely—it would require all of the main-sequence stars to be extreme subdwarfs, in conflict with their spectral types.
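    The ~1.9 mag figure follows directly from the ratio of parallaxes, since at fixed apparent magnitude the change in implied absolute magnitude is 5 log10(π_new/π_old):

        import math
        # Raising the mean reference-star parallax from 0.89 mas by 1.3 mas:
        print(round(5.0 * math.log10((0.89 + 1.3) / 0.89), 2))  # ~1.95 mag fainter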

    (2) Was our ground-based CCD photometry affected by the presence of the bright Polaris A in the frames? The required sense to give agreement with Hipparcos would be that the reference stars are actually systematically brighter than indicated by our measurements. Here we have a check, because the FGS measurements provide independent estimates of the V magnitudes, based on the observed count rates and an approximate absolute calibration. Setting aside R7 and R8, which are the angularly closest of the reference stars to the very bright Polaris A, we find our measured FGS magnitudes are an average of only 0.09 mag brighter than the ground-based V magnitudes. Such an amount is likely consistent with contamination of the FGS photometric measurements by background scattered light from Polaris. (Background scattered light is not subtracted from the measured counts in the FGS reductions.)

    (3) Did scattered light or dark counts affect the FGS astrometry? The Polaris astrometric field is unique among those measured with the HST/FGS system, because of the presence of the extremely bright Polaris A near the center of the field. In addition to the magnitude measurements noted in the previous paragraph, we indeed see evidence of scattered light across the field. This shows up as enhanced count rates detected as the instantaneous FGS field of view is slewed across the blank sky from one reference star to the next. However, this background light is faint, incoherent with the light from the FGS target stars, and displays no significant gradient over the scale length of FGS interferometric measurements. Thus, the background only slightly reduces the amplitude of the interference fringes, without significantly displacing the measured positions. This is the same effect that dark counts from the photomultiplier tubes have on the fringe amplitude of faint stars, but likewise without systematically affecting their measured positions. To verify these conclusions, we conducted extensive tests whereby each reference star, as well as pairs and triplets of reference stars, was removed from the solution to reveal any unusually affected individual exposures. Removing reference stars increased the errors in the parallax measurements but did not systematically change the parallax of Polaris B by more than 0.3 mas. We therefore conclude that the FGS measurement of the Polaris B parallax was not significantly affected by the presence of Polaris A.

    (4) What evidence does Gaia provide? The recent first Gaia data release (DR1; Gaia Collaboration et al. 2016a, 2016b) provides an additional test of our results. Positions of Polaris B and the FGS reference stars were tabulated in DR1, but none of them are contained in the Tycho-Gaia Astrometric Solution (TGAS), and thus none as yet have a Gaia-based parallax or proper motion. (Polaris A was also not included in DR1 or TGAS, as it is too bright for the standard Gaia pipeline processing.) However, we used the epoch 2015.0 Gaia positions for the reference stars and Polaris B to simulate an additional FGS observation set, and then combined them with the rest of our data. We found excellent agreement of the FGS astrometry with the Gaia catalog positions (to better than 1 mas), but the combined solution gave an even slightly smaller parallax for Polaris B of 5.90 ± 0.29 mas. Since DR1 flags the positions of Polaris B and the reference stars as being based upon a "Galactic Bayesian prior for parallax and proper motion relaxed by a factor of ten," we decided not to include the Gaia measurement in our final solution. Nonetheless, the excellent agreement of the FGS and Gaia DR1 astrometry strengthens our conclusion that our measurements have not been contaminated by the presence of Polaris A.


    NGC 7635 (Bubble nebula)

    1.0-3.5 mK. Our data result from two different experiments performed, calibrated, and analyzed in similar ways. A C II survey was made at the 3.5 cm wavelength to obtain accurate measurements of carbon radio recombination lines. When combined with atomic (C I) and molecular (CO) data, these measurements will constrain the composition, structure, kinematics, and physical properties of the photodissociation regions that lie on the edges of H II regions. A second survey was made at the 3.5 cm wavelength to determine the abundance of 3He in the interstellar medium of the Milky Way. Together with measurements of the 3He+ hyperfine line, we get high-precision RRL parameters for H, 4He, and C. Here we discuss significant improvements in these data with both longer integrations and newly observed sources.

    149deg (Region 1) and four pulsars towards l 113deg (Region 2) lie behind H II regions which seriously affect the pulsar rotation measures. The rotation measure of PSR J2337+6151 seems to be affected by its passage through the supernova remnant G114.3+0.3. For Region 1, we are able to constrain the random component of the magnetic field to 5.7 μG. For the large-scale component of the Galactic magnetic field we determine a field strength of 1.7 ± 1.0 μG. This average field is constant on Galactic scales lying within the Galactic longitude range of 85deg

    3.7×10^-6 ergs s^-1 Å^-1 cm^-2 sr^-1 near 8300 Å and with an ERE to scattered light band integrated intensity ratio, I(ERE)/I(sca), of about 0.7. At farther distances, approaching the broad, bright H II region, the ERE band and peak intensity shift toward longer wavelengths, while the ERE band-integrated intensity, I(ERE), diminishes and, eventually, vanishes at the inner edge of this H II region. The radial variation of I(ERE) and I(ERE)/I(sca) does not match that of the optical depth of the model derived for the dust lane. By contrast, the radial variation of I(ERE), I(ERE)/I(sca) and of the ERE spectral domain seems to depend strongly on the strength and hardness of the illuminating radiation field. In fact, I(ERE) and I(ERE)/I(sca) diminish and the ERE band shifts toward longer wavelengths when both the total integrated Lyman continuum photon rate, Q(H0)TOT, and the characteristic effective temperature, Teff, of the illuminating OB stars increase. Q(H0)TOT and Teff are estimated from the extinction-corrected Hα (λ=6563 Å) line intensity and the line intensity ratios [N II] (λ6583)/Hα and [S II] (λλ6716+6731)/Hα, respectively, and are consistent with model and observed values typical of OB associations. Unfortunately, we do not have data shortward of 5300 Å, so that the census of the UV/optical flux is incomplete. The complex radial variation of the ERE peak intensity and peak wavelength of I(ERE) and I(ERE)/I(sca) with optical depth and strength of the UV/optical radiation field is reproduced in a consistent way through the theoretical interpretation of the photophysics of the ERE carrier by Smith & Witt, which attributes a key role to the experimentally established recognition that photoionization quenches the luminescence of nanoparticles. When examined within the context of ERE observations in the diffuse interstellar medium (ISM) of our Galaxy and in a variety of other dusty environments, such as reflection nebulae, planetary nebulae, and the Orion Nebula, we conclude that the ERE photon conversion efficiency in NGC 4826 is as high as found elsewhere but that the size of the actively luminescing nanoparticles in NGC 4826 is about twice as large as those thought to exist in the diffuse ISM of our Galaxy.

    15 deg^2 between l = 108° and 113° was similarly surveyed in 13CO. In the region covered in both isotopic species we find at least seven GMCs with masses on the order of 10^5 Msolar. An intensity-weighted radius gives a more meaningful measure of cloud size than the simple geometrical area and is best used to estimate the virial mass. The ratio of total cloud luminosities in CO and 13CO, S12/S13, ranges from 6 to 10, with a mean of 8.5. The distribution of molecular gas is very similar in CO and 13CO, and the velocity-integrated intensities at each point are closely correlated. In the (l,v)-diagram the Perseus arm is kinematically separated from the local arm by an interarm gap that is nearly free of CO; the contrast in molecular gas surface density between the Perseus arm and the interarm gap is apparently at least 20.

    1.2×10^-5 ergs s^-1 cm^-2 sr^-1, is roughly one-third of the scattered light intensity, consistent with recent color measurements of diffuse Galactic light. The peak of the cirrus ERE (λ0 ~ 6000 Å) is shifted toward short (bluer) wavelengths compared with the ERE in sources excited by intense ultraviolet radiation, such as H II regions (λ0 ~ 8000 Å); such a trend is seen in laboratory experiments on hydrogenated amorphous carbon films.


    Cables used in Mines: Distribution, Installation and Cable Junctions (With Diagram)

    Electricity is used for many purposes at many places in any mine, both underground and at the surface. The electrical power required is obtained either from a generating station at the colliery or, more usually, from the local electricity supply, through a substation.

    Cables used underground at collieries have to withstand unfavourable conditions, being exposed to falls of roof, dampness and other potential causes of damage.

    Mining cables must therefore be robustly made to withstand the rough use they receive, and constant maintenance is required to ensure their safety and reliability. Reliable, robust cables are essential for efficient coal production.

    Moreover, these mining cables should conform to the earthing regulations, namely, that the conductance of the earthing conductor should be at least 50 per cent of that of one of the power conductors.

    In the mines, for the main high- and medium-voltage distribution lines, PVC/XLP insulated cables with metric dimensions are now used. Before the introduction of metric cable sizes, the same cables in inch (imperial) sizes were used, and many of these are still in service. Before PVC-insulated cables came into use, the most common type was the paper-insulated, lead-sheathed cable.

    Considerable amounts of this type of cable are still in use. Cables with from two to four cores (conductors) are available. For three-phase a.c. distribution, three-core cables are normally used, one core for each phase of the supply system.

    The make-up of the cores is as follows:

    (a) Plain copper wires – stranded conductor.

    (b) Pre-formed solid aluminium rod – solid conductor.

    (c) Plain aluminium wires – stranded conductor.

    The cross-section of each conductor is a sector of a circle. The individual cores are insulated by a covering of coloured PVC insulating compound, the colours of the three power cores being red, yellow and blue. When four-core cables are used, the fourth core is the neutral and is coloured black.

    The conductors of the cable are laid up together in a spiral. Any gaps between them may be filled out with worming to give a uniform circular section. The assembled conductors are usually bound together with a layer of tape.

    The laid-up cable is covered by a bedding, i.e. a sheath of extruded PVC, to prevent moisture from getting in. Cables are available in single-armoured and double-armoured types. Each layer of armour consists of galvanized steel wires laid spirally along the cable.

    With double-armoured cable, a separator of compounded fibrous tape separates the two layers of armour, and the galvanized wires are spiralled in opposite directions. The armouring forms the earth conductor of the cable, and so it is important from the earthing point of view.

    Paper Insulated Cable:

    The conductors of paper insulated cables are covered with layers of paper tape. They are then laid up with paper or jute worming and bound in more paper tape. The laid up cable is impregnated with a non-draining insulating compound.

    This is then enclosed in an extruded lead sheath which is covered with a layer of compounded fibrous tape. This type of cable may have a single or double armour over the lead sheath, the armour being covered overall by an extruded PVC sheath.

    Several methods of installation are used at the surface of the mine. The method of installation of course depends upon conditions in a particular colliery.

    The methods generally are:

    (a) Suspension:

    Suspended from a catenary wire or wall hooks. Rawhide or lead-braided cable suspenders are usually used for this purpose.

    (b) Cleats:

    Cleat fixing is most commonly used where the cable is required to run along the side of a building.

    (c) Duct:

    A duct is made by digging a trench and lining it with bricks or concrete; the cable is fastened to the wall of the duct by brackets or cleats.

    (d) Wall Brackets:

    The cable rests in brackets bolted to the wall. This type of installation is normally used when the cable runs along a wall inside a building.

    (e) Trench:

    The cable trench should be of adequate depth, taking account of the operating voltage of the cable and the site conditions. The cable should be laid on a bed of sand in the trench bottom, and then covered with sand. Interlocking cable tiles should then be bedded onto the sand so as to provide a continuous cover over the length of buried cable.

    The cable tiles should then be covered with earth free from stones, foreign objects, etc., and the trench backfilled. Finally, cable trench “Marker Posts” should be erected to identify the cable trench route.

    (f) Shaft Installation:

    The normal method of securing a cable vertically in the shaft is to clamp it at regular intervals by means of wooden cleats. Wooden cleats are obtainable in lengths from 2 ft. to 6 ft. The choice of cleat of course depends upon the load it has to carry.

    Boring the Cleat:

    Cleats are bored individually to suit the cable being installed thereby ensuring that they obtain a very firm grip. The method of boring the cleat is to clamp the two halves together with a 6.35 mm (1/4 inch) board sandwiched between them.

    A hole is then drilled through the cleat to the same diameter as the cable over the outer wire armour, i.e. omitting the overall serving. When the boring is complete, the board is removed so that the cleat has a 6.35 mm nip on the cable when tightened correctly.

    Single Point Suspension:

    An alternative method of installation in a shaft is to suspend the cable from a single point at the top of the shaft. A suspension cone is used. At the point from which it is to be suspended the cable is provided with quadruple armouring.

    The cable is in fact suspended by two layers of armouring doubled over and fitted into the cone. When the cone is assembled, the cavity at the top is filled with compound. The suspension cone is fastened to the top of the shaft by heavy chains. This method is only suitable for comparatively shallow shafts and is not frequently adopted.

    Lowering the Cable:

    The normal method of lowering the cable into the shaft is to install the drum in a cage and to pay out the cable as the cage is lowered. The cable is anchored at the shaft top and cleated as the cage gradually descends. If the drum is too large to go into the cage, a platform is sometimes built underneath to accommodate the cable drum, and the men accompany it.

    An alternative method of lowering the cable is to lash it to a wire rope so that the cable can be controlled from the top of the shaft. The cable is usually lashed to the rope at approximately ten-foot intervals. When the cable has been lowered, a number of lashings at the top are cut, and this part of the cable is secured by cleats.

    Work then proceeds down the cable. At each step sufficient lashings are cut to enable a cleat to be installed. The cleat is then secured before more lashings are cut.

    Installation Underground:

    Close to the pit bottom, cleats on brackets may be used to secure cables to walls, but in roadways and gates the usual method of installation is to suspend the cables from bars or arches. Rawhide or lead-braided suspenders, like those used with catenary wires at the surface, are commonly used underground. Canvas or mild steel suspenders are also in use.

    The cable is suspended as high as possible over the roadway so that the chance of it being damaged by activity below is minimised. The cable suspenders are usually designed to break in the event of a serious fall of roof, so that the cable comes down with the roof rather than being torn; in this way the risk of damage to the cable is reduced.

    The cable must not be drawn tight at any point. Slackness is necessary throughout its length to accommodate roof movements.

    The length of cable which can be taken underground in one piece is limited by either:

    (1) The size of cable drum which can be lowered down the shaft and transported in-bye, or

    (2) The amount of cable which can be coiled. A longer run, such as is necessary to take the electricity supply in-bye from the pit bottom, therefore has to consist of lengths of cable joined together by means of cable couplers or junction (joint) boxes. Both methods result in a satisfactory joint when filled with compound.

    Cable Couplers:

    A cable coupler is in two identical halves, one half fitted to the end of each of the cables to be joined. Each half of the coupler has a contact tube for each cable conductor. When the cables are in place, the two halves of the coupler are brought together, and contact pins are inserted into the contact tubes to complete the connections. The halves are then bolted together to make a flameproof joint as shown in Fig. 15.2.

    If it becomes necessary to part the cables again, the two halves of the coupler are simply unbolted and drawn apart. All the work of assembling the coupler halves onto the cables is done at the surface; each cable is taken underground with the couplers attached.

    Junction Box:

    When a junction box is used, each conductor of one cable is joined to the corresponding conductor of the other cable by means of an individual ferrule or connector. When the junction is complete, the box is filled with compound. Once the junction box has been filled, it is difficult to part the cables again, as this involves melting the compound and draining it from the box in order to free the connectors. All the work of assembling a junction box has to be done underground, at or very near the place where it is to be installed, and for this reason junction boxes are now less commonly used than cable couplers.

    Connecting a Cable to a Cable Coupler:

    A typical sequence of operations for making up a cable coupler is as follows:

    (1) Preparing the Cables:

    The lengths of serving, armouring, bedding and conductor insulation which are removed from the end of the cable depend upon the make of coupler and can be found from the maker’s instructions. Before the armour is removed, the armour clamp is passed along the cable. When removing the armour, do not cut right through with a hacksaw, as it will then be difficult to avoid damaging the bedding.

    The correct procedure is to cut part of the way through the strands and then to break them off by bending them to and fro. When the cable has been cut back, the exposed armouring must be cleaned until it is bright, and if the cable has a lead sheath, this also must be cleaned thoroughly.

    (2) Fitting the Cable Gland:

    The ends of the armouring are expanded so that the inner core gland, complete with gland bolts, can be inserted beneath them. If there are two layers of armouring, an inter-armour core is inserted between the two layers. The armour clamp (which was put on before cutting the armour) is drawn forward over the expanded armour and onto the gland bolts; the bolts are then tightened to secure the armour in the gland. If the cable has a lead sheath, the gland should be packed with lead wool in accordance with the manufacturer’s instructions.

    (3) Fitting the contact tubes and interior insulator moulding:

    The insulation of the individual conductors is now cut back to the prescribed length. The insulator steel support pillars are fitted to the inner core gland, and the interior insulator moulding, complete with contact tubes, is offered up to the support pillars; this enables the core lengths to be checked.

    If correct, the contact tubes can now be fitted to the cable cores. In the case of aluminium conductor cores, these may be soldered (in inert gas) or crimped with a compression tool, in accordance with the manufacturer’s instructions.

    In the case of copper conductor cores, they may be soldered or fixed by grub screws. After fixing the cores in the contact tubes, the interior insulator moulding must be fitted to the tubes and secured to the support pillars.

    (4) Fitting the Coupler Body:

    The coupler body may now be fitted over the interior insulator and bolted in position; check the flameproof (F.L.P.) gap to ensure that the joint is flameproof.

    (5) Filling the Coupler Case:

    The filler and vent plugs are removed, and insulating compound is poured in. With PVC cables, hot filling compound (with a temperature not exceeding 135°C) or a cold pouring compound is used in order to avoid melting the cable insulation. The compound may contract as it sets and needs to be topped up. When the compound has set, the plugs are replaced.

    When a coupler has been assembled and the compound has set hard, insulation resistance between each pair of conductors, and between each conductor and the coupler case, is tested with a suitable tester, like Megger or Metro-ohm.

    When both ends of the cable have been prepared, the continuity of each conductor through the cable is tested with a continuity tester, to ensure that the internal connections are secure and adequate.

    It is particularly important to test continuity between the cases of the two couplers to ensure that the earth conductor conforms with the earthing regulations, namely that the conductance of the earth conductor is at least 50 per cent of that of a power conductor.
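    Since conductance is the reciprocal of resistance, the 50 per cent rule is equivalent to requiring that the earth path's resistance be no more than twice that of a power conductor. A small sketch of the check, with hypothetical readings in ohms:

        # Earthing-regulation check: conductance(earth) >= 0.5 * conductance(power),
        # i.e. resistance(earth) <= 2 * resistance(power).
        def earth_path_compliant(r_power_ohms, r_earth_ohms):
            return (1.0 / r_earth_ohms) >= 0.5 * (1.0 / r_power_ohms)

        print(earth_path_compliant(0.5, 0.9))  # True: earth resistance < 2x power
        print(earth_path_compliant(0.5, 1.2))  # False: earth path too resistive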

    If the earth conductor is provided by the cable armouring, then earth continuity will depend upon how securely the armouring has been clamped by the cable gland. It is important, when testing such a cable, to measure earth continuity between the cases of the cable couplers so that the electrical connections between the armour glands and the armouring are tested correctly.

    When a coupler has been tested, it is wrapped tightly in hessian or plastic sheeting, and the cable end is lashed to a staple on the drum. It is good practice to bolt a blanking plate over the end of the coupler to protect the flange of the flameproof joint. While the cable is in storage, it should be kept as dry as possible to prevent moisture from getting into the insulation.

    Making up a Junction Box:

    The sequence of operations for making up a junction box is as follows:

    (1) Mounting the Box:

    If conditions permit, the box is first bolted in the position in which it is to be installed, i.e. on a brick pillar or in an inset. If the position is hard to reach, the box may be made up below or alongside its final position and lifted into place when complete.

    (2) Preparing the Cable:

    The method of preparing the cables is similar to that for a cable coupler.

    (3) Fitting the Glands:

    The armour clamps and glands are similar to those used with a cable coupler. It is usual to bolt down the clamps before beginning work on the internal connections.

    (4) Making Electrical Connections:

    The insulation of the individual conductors is cut back to the required dimensions, and the remaining insulation is strengthened by wrapping insulating tape around it. The ends of the conductors are shaped to a circular section, if necessary. The ferrules or connectors are now fitted to the ends of the conductors, and their grub screws are tightened. The entire joint is then bound with insulating tape.

    In some types of box, the connectors are bolted to wooden or porcelain bases. In other types, the ferrules are unsupported, but the cable conductors are held apart by insulating spreaders. Some makers require that the connections be staggered inside the box; this requirement is anticipated by the dimensions given for the individual conductors when the cable is prepared.

    Before the box is closed, the insulation resistance between each pair of conductors and between each conductor and the box must be tested with a suitable insulation resistance tester. A similar test from the unconnected end of one of the cables is required after the box has been filled.

    The cover is now bolted on. The joints between the cover and the body of the box should be tested with a feeler gauge to ensure that they are flameproof. If an earth board is provided, ensure that it is fitted securely and with good electrical contacts.

    (8) Filling with Compound:

    The filling plugs and the vent plugs are removed and the box filled with compound. As the compound sets and contracts, it may be necessary to top it up. When the box has been filled, the plugs are replaced. If the junction box is underground, or in a shaft, the compound cannot be heated near the actual site of the box.

    If hot pouring compound is to be used, it must be heated on the surface and carried in an insulated bucket to the place where it is to be poured. The minimum pouring temperature for many compounds is around 150°C. If the junction box is far underground, requiring a long journey to reach it, it may not be possible to keep the compound hot enough to pour by the time the box is reached.

    In such cases, and wherever it is impracticable to use hot compound, it is advisable to fill the box with a cold pouring compound. A cold pouring compound is made by mixing a hardener into a bituminous oil; once the two constituents are mixed, the compound takes up to 24 hours to set hard.

    The compound can, of course, be mixed underground beside the box, and in most practical cases this type of cold pouring compound has proved very useful. To fill with cold pouring compound, first pour the bituminous oil into a clean container and then add the hardener. The mixture must be stirred vigorously until the two constituents are thoroughly blended, so that no sediment remains.

    The compound should be poured into the box without delay, and the filling plugs replaced. As soon as the joint has been filled, any mixture left in the bucket should be cleaned out, as the compound cannot be removed once it has set.

    Installing Cable Couplers and Junction Boxes:

    Junction boxes used underground, are usually mounted on brick pillars, or in insets cut into the side of a roadway. Cables are usually attached to the wall by cleats near where they enter the junction boxes. Plenty of slack is left, so that in the event of a roof fall which brings down the cable, as little strain as possible is placed directly upon the box.

    Cable couplers, and sometimes junction boxes are suspended from the roof by cradles. If there is a roof fall, the coupler or box comes down with the cable. Cable joints are rarely made in shafts, but when they are, the box is usually placed in an inset in the side of the shaft. Some types of junction boxes are designed to be bolted vertically to the side of the shaft.

    Types of Flexible Cables in Mines:

    Flexible cables used in the electrical system of a mine fall into two main categories – trailing cables and pliable wire armour cables.

    (1) Trailing Cables:

    The majority of modern trailing cables have five cores—three power cores for the three phase a.c. supply, a fourth core for the pilot and a fifth core for the earth. Cores are always insulated with a synthetic insulation such as C.S.P. (Chloro Sulphonated Polyethylene) or E.P.R. (Ethylene Propylene Rubber). Some cores have an insulation of E.P.R. which is then covered with a layer of C.S.P. (two layers of insulation).

    The earth core in some types of trailing cable is not insulated but is laid up bare in the centre of the cable. The synthetic compound C.S.P. is a harder insulating compound than rubber and is more resistant to penetration by broken core or screen wires. It has a low insulation resistance and high capacitance, with a consequently long charging time when measuring the insulation resistance.

    The insulated cores are laid up in a variety of ways dependent upon the type of cable.

    In some, the cores are laid up in a spiral about a centre cradle; the spiral is fairly tight, particularly in the case of drill cables, so that the cable can flex easily without imposing stresses on the individual cores. In others, either the pilot or the earth core runs in the centre cradle with the other cores laid up around it.

    The majority of modern trailing cables are of the individually screened type, where the screens are earthed. The screening provides electrical protection for the cable: should it be accidentally damaged and penetrated by a metallic object, the object will first make contact with the earthed screen before touching a live core.

    The possibility of a short circuit between live cores is therefore greatly reduced, as the earth leakage protection will detect the earth fault and trip the controlling gate-end box before the short circuit is made.

    There are two types of individually screened trailing cables:

    (i) The copper/nylon braided screen, and

    (ii) The conductive rubber screen.

    Trailing cables having conductive rubber screens must only be used on a system having sensitive earth leakage protection which limits the earth fault current to 750 mA on power cables and 125 mA on drill cables. Trailing cables are sheathed overall in P.C.P. (polychloroprene).

    (2) Pliable Wire Armoured Cables:

    These cables consist of three or four cores with synthetic insulation on the cores. The core insulation is usually C.S.P. or E.P.R. (or C.S.P. over E.P.R.) for cables operating on system voltages up to 1,100 volts. For cables operating on systems in excess of 1,100 volts and up to 6,600 volts, the core insulation is butyl or E.P.R.

    The cores are laid up round a centre and then enclosed in an inner sheath of P.C.P. The armour consists of a layer of flexible galvanized steel strands laid up in a spiral over the inner sheath, and the cable is covered overall by a sheath of P.C.P.

    Copper/nylon braided screening is provided around each individual power core, in a similar manner and for similar reasons to those previously described for trailing cables; as with trailing cables, the earth cores are not screened.

    Trailing cables are normally connected to equipment by means of a plug which mates with a corresponding socket on the equipment. Plugs and sockets are of two kinds: bolted and restrained. Bolted plugs and sockets have matching flanges which mate when the plug is fully inserted in the socket; the flanges are then bolted together by studs which screw into the socket flange.

    Restrained plugs and sockets are pulled and held together by an extractor screw. The socket extractor screw has a latch (cam) which engages a flat on the plug body; by turning the screw, the plug is pulled into the socket and held in place. When properly assembled, both bolted and restrained types form flameproof junctions. Here again, the flameproof path and gaps must be checked.

    Plugs and sockets with different current and voltage ratings are in use, the rating chosen depending upon the loading of the equipment to which the cable is connected and upon the system voltage. The 150 amp restrained plug and socket is the one most commonly used on voltages up to 660 volts.

    A dual-voltage version of the 150 amp restrained plug and socket has been designed and recently made available. This is suitable for operation on 660/1,100 volt systems and, in addition, has been uprated to 200 amps. To differentiate between the two voltages, the 1,100 volt mode has its insulators and contact tubes turned through 180°. The 660 volt mode is fully interchangeable with the 150 amp, 660 volt range.

    The 30 amp, 660 volt bolted-type plug and socket is provided for small-h.p. equipment; the plugs and sockets of different manufacturers are designed to plug into each other. Earlier types of 1,100 volt plugs and sockets, rated at 50 amp and 150 amp, are also still in existence.

    These older types are not interchangeable with the types noted above, nor with other manufacturers’ products. In present-day design, interchangeability is a most important consideration.

    The standard colour code for cable core identification has changed with metrication. For comparison, the following table gives the new metric colour code alongside the old imperial colour code; this is important because the old codes are still in use and will remain so for years to come.

    Installation:

    Wherever possible, pliable armoured and trailing cables are suspended from roof bars or arches. Where they have to run along the floor, they should be laid to one side where they will be out of the way of passing traffic and exposed to the minimum risk of damage.

    At road heads, cables must be protected by steel channels or pipes. Trailing cables running down the face must be placed where they will not foul machinery, jacks and roof supports, and where they are least likely to suffer damage from work in progress, falls of roof or any other cause.

    Many conveyors are fitted with an armoured channel to receive cables; where such a conveyor is in use, it is essential to ensure that the cable is properly protected by the channel. If the coalface machine is fitted with a cable-handling device, ensure that the cable engages with it correctly. Cables are made in standard lengths and, for this reason, a cable may be longer than the run for which it is to be used.

    The spare length of cable should be taken up by coiling it in a figure of eight. Never make a circular coil, as this will introduce twists, which could lead to the conductors being strained or the armouring “bird-caging”. The coils provide a reserve of cable which can be laid out if the run is to be lengthened, e.g. between the in-bye substation and the gate-end panels when the face moves forward.

    Electrical engineers in mines must always be alert to these factors, in order to avoid delay and loss of production and, above all, to prevent accidents.

    Fault Finding:

    Faults in cables are usually detected through their effect upon the equipment they serve. A fault is likely to trip out a contactor or circuit breaker through the earth fault protection or the overload protection. The type of fault, and the conductor or conductors affected, can be confirmed by carrying out insulation and conductance tests.

    Once the type of fault is known, there remains the problem of finding where along the length of the cable the fault has occurred. Finding a fault by inspection is laborious, and a fault could pass unnoticed unless a very thorough and detailed examination is made. One of the following tests is therefore used to find the approximate position of the fault before visual examination begins.

    These tests are most frequently performed in the workshop. If a trailing or pliable armoured cable becomes defective, it is replaced by a sound cable and brought to the surface for repair. If a fault develops on a main distribution line, it may be necessary to perform the test with the cable in position, so that the fault can be repaired on the spot, or only a small section of the cable renewed.

    The tests are of particular value when a fault occurs in a buried cable at the surface.

    Earth Fault Test:

    This test is used to locate a fault between a conductor and the screen or armouring. Several forms of the test are in use; the simplest is the Murray loop test, which uses the principle of the Wheatstone bridge. The equipment required and the connections to be made are shown in Fig. 15.3.

    A and B are two variable resistances (or parts of a resistance box).

    The earth fault test is described below:

    1. Isolate both ends of the cable and discharge to earth.

    2. At one end of the cable, connect the faulty conductor to a sound conductor of equal cross-sectional area.

    3. At the other end of the cable, connect the test equipment as shown in Fig. 15.3.

    4. Switch on the supply and adjust the resistances A and B until the galvanometer reads zero.

    5. The values of the resistances A and B when the galvanometer reads zero are used to find the fault: the distance (X) to the fault = A/(A + B) × twice the length of the cable.
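    A short sketch of this calculation (the function name and example values are illustrative):

        # Murray loop test: at balance, distance to fault X = A/(A+B) x 2L,
        # the loop being twice the cable length because of the return conductor.
        def murray_loop_fault_distance(a_ohms, b_ohms, cable_length_m):
            return (a_ohms / (a_ohms + b_ohms)) * 2.0 * cable_length_m

        # Hypothetical balance: A = 15 ohm, B = 85 ohm on a 500 m cable
        print(murray_loop_fault_distance(15.0, 85.0, 500.0))  # 150.0 m from test end

    The same arithmetic serves the short-circuit variant of the test described next.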

    Short Circuit Test:

    This test is used to find a short circuit between two conductors of a cable. One of the faulty conductors is earthed, and the fault is located by the Murray loop test, using the other faulty conductor and a sound conductor, as shown in Fig. 15.4, where A and B are again two variable resistances (or parts of a resistance box).

    The Galvanometer is balanced at zero by adjusting the resistance.

    Open Circuit Test:

    This test is used to find a break in one of the cable conductors. The principle of the test is to compare the capacitance of one part of the faulty conductor, with the capacitance of the whole of a sound conductor.

    The method is as follows:

    1. Isolate both ends of the cable and discharge to earth.

    2. At one end of the cable, connect the test equipment as shown in Fig. 15.5. The sound conductor to be used must have the same cross-sectional area as the broken conductor.

    3. Earth both ends of the broken conductor and all the conductors in the cable, except the sound conductor to which the supply is to be connected.

    4. Switch the supply on to the sound conductor and allow the conductor to become fully charged.

    5. Immediately connect the charged conductor to the galvanometer and note the time taken for the conductor to discharge. The discharge time is measured from the moment when the switch is connected to the moment when the galvanometer pointer returns to zero.

    6. Disconnect the test equipment from the sound conductor, and earth the conductor.

    7. Remove the earth connection from the test end of the broken conductor, and connect the test equipment to the conductor.

    8. Charge the broken conductor, and find the discharge time.

    9. The distance (X) to the fault = (discharge time for broken conductor ÷ discharge time for sound conductor) × length of cable.
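    In code, with hypothetical timings (the method assumes a conductor's capacitance, and hence its discharge time, is roughly proportional to its length):

        # Open-circuit (capacitance) test: the break position is proportional
        # to the ratio of discharge times of the broken and sound conductors.
        def open_circuit_fault_distance(t_broken_s, t_sound_s, cable_length_m):
            return (t_broken_s / t_sound_s) * cable_length_m

        # Hypothetical readings: 12 s vs 40 s on a 600 m cable
        print(open_circuit_fault_distance(12.0, 40.0, 600.0))  # 180.0 m from test end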

    All the earth systems for the various sections of the colliery are, in fact, connected into a single system, which terminates on the surface, where it is connected to the general body of the earth by one or more earth plate connections.

    The safety of the whole electrical system depends upon efficient earthing at this point, and the earth plate connections must therefore be tested from time to time. The test can be carried out with an earth tester (e.g. a Megger), or by the fall-of-potential method using the equipment shown in Fig. 15.6, which illustrates the earth plate test described below.

    Earth Plate Test:

    This is a very important test; the method is as follows:

    1. Disconnect the earth plate to be tested from the electrical system.

    Ensure that the electrical system is still connected to earth by other plates. If there is only one earth plate, the test can be carried out only when the electrical system is shut down.

    2. Insert the two earthing spikes into the ground, placing one about twice as far from the earth plate as the other. Suitable distances would be: PA 12 m, PB 24 m. A large distance is required to ensure that each electrode is well outside the resistance area of the earth plate under test. Ensure that each spike makes a good connection to earth.

    3. Connect the equipment as shown in Fig. 15.6. Correct connections for an earth tester are supplied with the instrument.

    4. Switch on the test supply and note the readings on the two instruments. The reading on the voltmeter, divided by the reading on the ammeter gives a value in ohm for the resistance of the earth plate connection to earth. The resistance can be read directly from an earth tester.

    5. Switch off the supply and move spike B about 6 m closer to the earth plate, e.g. PA 12 m, PB 18 m.

    6. Switch on the supply and again find the earth plate resistance.

    7. Switch off the supply and move spike B to a position about 6 m further from the earth plate than its original position, e.g. PA 12 m, PB 30 m.

    8. Switch on the supply and again find the earth plate resistance.

    9. If the three values obtained in steps 4, 6 and 8 lie within about 0.25 ohm of one another, find the average of the three values and accept this as the resistance of the earth plate connection to earth.

    If the three values now show a greater variation it is probable that the test spikes were not located outside the resistance area of the earth plate. It will be necessary to repeat the entire test to find three readings which do not differ by more than 0.25 ohm. Start with test spikes further apart than before.

    A final value of 1 ohm or less indicates a good earth connection. The maximum value which may be accepted is 2 ohms.
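    Steps 4 to 9 amount to a simple acceptance rule, sketched below with hypothetical readings (spread limit 0.25 ohm; 1 ohm good; 2 ohms the maximum acceptable):

        # Fall-of-potential evaluation for an earth plate, per steps 4-9 above.
        def evaluate_earth_plate(readings_ohms):
            if max(readings_ohms) - min(readings_ohms) > 0.25:
                return "spread > 0.25 ohm: repeat with spikes further apart"
            avg = sum(readings_ohms) / len(readings_ohms)
            if avg <= 1.0:
                return f"{avg:.2f} ohm: good earth connection"
            if avg <= 2.0:
                return f"{avg:.2f} ohm: acceptable, but near the 2 ohm limit"
            return f"{avg:.2f} ohm: unacceptable"

        print(evaluate_earth_plate([0.80, 0.90, 0.85]))  # good
        print(evaluate_earth_plate([0.80, 1.30, 0.85]))  # spread too large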


    Type 1: Temperature

    (i) Thermocouple – A thermocouple is made of two wires (each of a different homogeneous alloy or metal) which are joined at one end to form a measuring junction. This measuring junction is exposed to the medium being measured. The other ends of the wires are terminated at a measuring device, where a reference junction is formed. A current flows in the circuit when the temperatures of the two junctions differ, and the resulting millivoltage is measured to determine the temperature at the measuring junction. The diagram of a thermocouple is shown below.
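    A minimal sketch of the conversion, assuming a constant sensitivity of about 41 µV/°C (roughly a type K thermocouple over a modest range); real instruments use standardised polynomial reference tables rather than a single linear factor:

        # Thermocouple reading -> junction temperature, assuming a constant
        # Seebeck coefficient. Illustrative only.
        SEEBECK_UV_PER_C = 41.0  # assumed ~type K sensitivity, microvolts per degC

        def junction_temperature_c(millivolts, reference_temp_c):
            delta_t = (millivolts * 1000.0) / SEEBECK_UV_PER_C  # mV -> uV -> degC
            return reference_temp_c + delta_t

        # 4.1 mV with the reference junction at 25 degC -> about 125 degC
        print(round(junction_temperature_c(4.1, 25.0), 1))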

    European Symposium on Computer Aided Process Engineering-12

    Arnoud Nougues, Rob Snoeren, in Computer Aided Chemical Engineering, 2002

    APC and Base Layer Control Monitoring

    Shell has developed, and is in the process of further developing, a complete suite of software packages called MD (Monitoring and Diagnosis) for monitoring the performance of control loops and assisting in troubleshooting loops that fail to meet their performance targets. The tools apply to multivariable controls, both Shell's own technology (SMOC) and those of any APC vendor, as well as to traditional Single Input-Single Output loops (e.g. PID controllers).

    The central element of MD is a client-server information system for control loop performance tracking. MD is linked to various commercial plant data historians (e.g. Yokogawa’s Exaquantum, OSI PI), where the basic real-time control loop status and performance information resides. Each day, performance statistics are automatically calculated and stored in a dedicated relational database. Control engineers are notified by daily email summary reports if control loops are performing below predefined targets. In addition, the user can enter a report mode in which the statistical information can be browsed.

    For every control loop and Controlled Variable (CV), MD provides the following statistical information:

    % in Service: optional controller and unit availability tags are monitored to determine whether the controller is in service.

    % Uptime: loop uptime is determined from the controller mode status.

    % in Compliance: this statistic indicates if a CV deviates significantly from either a setpoint or Min./Max. limits. The bound to indicate a significant deviation is determined from a user specified tolerance (CL) for each CV. If the CV is within the ± CL bound about the control limits (set range), the CV is considered to be in compliance. The information is reported as daily and monthly averages based on calculations carried out using typically one-minute data.

    % in Service, % Uptime and % in Compliance, together with a user-defined cost factor, are used to derive a cost incentive which is reported to the user on a daily and monthly basis (PONC: Price of Non-Conformance).
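    As a rough sketch of how such daily statistics could be computed from one-minute samples (this is not Shell's MD code; the names and the PONC formula are illustrative assumptions):

        # Daily loop statistics from one-minute samples (illustrative sketch).
        def daily_loop_stats(cv, lo, hi, in_auto, cl, cost_per_pct):
            n = len(cv)
            uptime = 100.0 * sum(in_auto) / n                  # from mode status
            ok = sum(1 for x in cv if (lo - cl) <= x <= (hi + cl))
            compliance = 100.0 * ok / n                        # within set range +/- CL
            ponc = cost_per_pct * (100.0 - compliance)         # assumed cost model
            return {"%Uptime": uptime, "%inCompliance": compliance, "PONC": ponc}

        cv = [1.02, 0.98, 1.10, 1.25, 0.95, 1.01]              # toy CV samples
        print(daily_loop_stats(cv, lo=0.9, hi=1.1, in_auto=[True] * 6,
                               cl=0.05, cost_per_pct=10.0))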

    Monitoring loop performance is not sufficient. Additional tools are required to help analyse loop performance related problems and troubleshoot under-performing loops efficiently. A number of innovative proprietary loop performance diagnosis techniques have been developed by Shell and are part of the MD suite of packages:

    Average closed loop response curves: both CV error and MV response curves are calculated and plotted. Average response curves provide a visual summary of the shape and response time of SISO as well as multivariable control loops, in response to the actual disturbances affecting the process and in response to setpoint changes. The average response curves are derived from fitting an ARMA (Auto-Regressive Moving Average) model to the loop time series data, typically over several hours or days of normal closed loop operation.
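    The idea can be sketched with a plain autoregressive fit (MD fits a full ARMA model; a least-squares AR fit keeps the sketch short, and the data here are synthetic):

        import numpy as np

        # Fit AR(p) by least squares and read the average error-decay shape off
        # the model's impulse response (a stand-in for ARMA-based response curves).
        def ar_fit(y, p=4):
            X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
            return np.linalg.lstsq(X, y[p:], rcond=None)[0]

        def impulse_response(a, n=20):
            h = np.zeros(n); h[0] = 1.0
            for t in range(1, n):
                h[t] = sum(a[k] * h[t - k - 1] for k in range(min(len(a), t)))
            return h

        rng = np.random.default_rng(0)
        e = rng.standard_normal(2000)
        y = np.zeros_like(e)
        for t in range(1, len(y)):      # synthetic closed-loop CV error series
            y[t] = 0.7 * y[t - 1] + e[t]
        print(np.round(impulse_response(ar_fit(y), n=5), 2))  # ~[1, 0.7, 0.49, ...]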

    Plot of sliding-window CV error standard deviation compared with the best achievable performance from a Minimum Variance Controller (MVC): the CV error standard deviation is calculated over a representative time span, and the calculation is then repeated by sliding the window from the start to the end of the data time range. The standard deviation that would have been achieved by the fastest possible feedback controller (MVC) is shown on a parallel plot. The CV error standard deviation plots are useful in assessing the loop performance in relative terms (achieved CV error standard deviation, and how it evolves in time) as well as in absolute terms (comparison with the reference MVC controller).
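    A compact sketch of the sliding-window comparison (synthetic error data; the MVC benchmark sigma is assumed here, whereas in practice it would be estimated from the loop dead time and a disturbance model):

        import numpy as np

        # Sliding-window CV error standard deviation vs. an assumed MVC benchmark.
        def sliding_std(err, window):
            return np.array([err[i:i + window].std()
                             for i in range(len(err) - window + 1)])

        err = np.random.default_rng(1).standard_normal(500) * 1.3  # synthetic data
        sigma_mvc = 1.0                                            # assumed benchmark
        ratio = sliding_std(err, 100) / sigma_mvc
        print(f"worst window: {ratio.max():.2f}x the MVC benchmark")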

    Degrees of freedom and constraint analysis: this technique applies to multivariable control applications. The idea is to track and report which controlled variables in a closed loop multivariable system are active (i.e. driven to their upper or lower specification limit) and how often these are active. Correspondingly, the activity status of the manipulated variables is reported, i.e. which MVs are limited or unavailable and how often. This information, presented in the form of bar plots and trends, provides insight into the activity and performance of a complex multivariable controller, and helps diagnose control structure problems (e.g. insufficient degrees of freedom to achieve the required control objectives).


    Automobile Technology

    III.B.6 Electronic Systems

    There are four fields in which the new techniques of electronic data acquisition and processing are used:

    Control functionalities required for the operation of the vehicle, which were partly carried out mechanically, hydraulically, or pneumatically in the past. Examples are overall engine management systems to cut back on fuel consumption or pollutant emissions, as well as transmission control to choose the optimal transmission ratio or to improve the shifting operation in the automatic transmission.

    Control functionalities concerning the movement of the vehicle on the road, which also improve active and passive safety by recognizing when the driver reacts too late or in an inappropriate fashion and correcting his steering and braking maneuvers in good time. Examples of the combination of axles, steering, and brakes with electronic data processing follow. The antilock brake system (ABS; Section 4) is sometimes backed up by brake assist, which shortens the stopping distance of a vehicle in an emergency braking situation. The brake assist recognizes a fear-induced reaction from the speed at which the driver depresses the brake pedal; it then proceeds to develop maximum brake power boost from the start of braking. The acceleration skid control system (ASR) prevents the drive wheels from slipping. ASR is an electronically controlled intervention in the engine power output and the brakes which is being used increasingly in both passenger cars and commercial vehicles. The electronic stability program (ESP) (Fig. 17B) drastically reduces skidding and sliding tendencies in the moving vehicle. The system detects potentially dangerous situations on the basis of different sensor signals (e.g., rotational speed of the wheels, yaw angle, and steering angle) faster than even the most experienced driver can. ESP then intervenes with high precision and stabilizes the vehicle with exactly measured activation of the brakes and/or reduction of the engine power output. Electronic control of the brakes and engine power helps in utilizing all the possible friction forces between the wheels and the road, but only within their physical limits. The electronic distance control regulates the distance to the vehicle in front via radar or infrared sensors, intervenes in the engine management, and, if necessary, activates the brakes. Activation of the airbag and belt tensioner is controlled by a deceleration sensor, which is rigidly connected to the car body in a central location (Section VI).

    Information and communication transfer between the vehicle, the driver, and the outside world. For decades, the car radio provided the only link with the outside world, and it was one-way. Recently, the mobile phone ushered in an era of two-way communication. Satellite-assisted navigation systems to keep track of the exact position of a vehicle, and an operating and display monitor, the basic components of an in-vehicle communication or telematics module, are now part of a modern car. This permits a wide range of new applications, including navigation systems that guide drivers to their destination with the aid of a CD-ROM road map. If current traffic data are also piped into the vehicle via the data interface (Fig. 17C), the route recommendation can take into account the traffic situation and, wherever possible, help avoid delays.

    Monitoring of all important vehicle functions such as ABS, airbags, lighting, etc. These functions are constantly checked; if a fault occurs, the driver receives a warning indication and/or an audible alarm.

    Despite these advances, however, there is no chance of electronics taking over entirely for the driver in the foreseeable future. Drivers will retain responsibility for monitoring the traffic situation themselves and reacting in the event of danger.