How do I modify redshifts to gain corrected line of sight velocities?

I'm currently trying to collect the data to run an N-body simulation for 11 of the galaxies in the Local Group where proper motions are known, however I don't understand how to get the required line of sight velocities for the simulation.

I have redshift data from NED for the required galaxies; however, judging by what sources tell me, this won't be the information required for the simulation, due to our motion around the Milky Way's galactic centre.

For example, one source gives the NED redshift for the Carina Dwarf as 229 km/s, but the corrected line-of-sight velocity as 14.3 km/s.

Does anybody know how to convert these redshifts into the desired corrected versions?

Thanks!!

You need a model for the motion of the Sun with respect to the Milky Way centre. You then have to subtract the component of this that is resolved towards the galaxy in question.

The solar motion around the Galaxy is somewhat uncertain, but is roughly (11, 240, 7) km/s when expressed as a vector aligned with Galactic coordinates (i.e. towards the Galactic centre, in the direction of Galactic rotation, and perpendicular to the Galactic disk).
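A minimal sketch of that correction (the function name is mine, and the solar-motion vector is just the rough value quoted above; published determinations differ at the few-km/s level): add the component of the solar motion resolved towards the galaxy's Galactic coordinates (l, b).

```python
import math

# Solar motion (U, V, W) with respect to the Galactic centre, km/s.
# These are the rough values quoted above; use whichever published
# determination your simulation's conventions require.
U_SUN, V_SUN, W_SUN = 11.0, 240.0, 7.0

def helio_to_gsr(v_helio, l_deg, b_deg):
    """Convert a heliocentric line-of-sight velocity (km/s) to the
    Galactic Standard of Rest, given Galactic coordinates (l, b)."""
    l = math.radians(l_deg)
    b = math.radians(b_deg)
    # Component of the solar motion resolved along the line of sight.
    v_sun_los = (U_SUN * math.cos(b) * math.cos(l)
                 + V_SUN * math.cos(b) * math.sin(l)
                 + W_SUN * math.sin(b))
    return v_helio + v_sun_los

# Sanity check: looking along the direction of Galactic rotation
# (l = 90 deg, b = 0), the full 240 km/s tangential motion is added.
print(helio_to_gsr(0.0, 90.0, 0.0))
```

For a real pipeline, `astropy.coordinates` can do the same transformation with a configurable Galactocentric frame, which avoids hand-rolled sign conventions.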

Redshift of distant galaxies: why not a Doppler effect?

How can I explain to my 17-year-old pupils that the observed redshift of distant galaxies cannot be interpreted as a Doppler effect and inescapably leads to the conclusion that space itself is expanding?

I understand that this redshift is well explained in general relativity (GR) by assuming that space itself is expanding. As a consequence, distant galaxies recede from us and the wavelength of the light is "stretched". Expansion, redshift and the Hubble law are explained coherently in GR, as well as many other phenomena (e.g. the cosmic microwave background), and the GR predictions about redshift agree with observations.

I understand that the redshift of distant galaxies cannot be explained as a Doppler effect of their motion through space. Why exactly is a pupil's Doppler interpretation wrong?

My first answer: "Blueshifted galaxies (e.g. Andromeda) are only seen in our local neighborhood, not far away. All distant galaxies show a redshift. At larger distances (as measured e.g. with Cepheids) the redshift is larger. For a Doppler interpretation of the redshift of distant galaxies we must necessarily assume that we are in a special place, to the discomfort of Copernicus. In this view, space cannot be homogeneous and isotropic." Is this answer correct?

My second answer: "A Doppler effect only occurs at the moment the light is emitted, whereas the cosmological redshift in GR grows while the light is traveling to us." My problem with this answer (if it is correct): what observational evidence do we have for a gradual (GR) increase of the redshift, disproving the possibility of an "instantaneous Doppler shift at the moment of emission"?

My third answer: "For galaxies at $z>1$ you can only have $v<c$ if you use the Doppler formula from special relativity (SR): $v=\frac{(z+1)^2-1}{(z+1)^2+1}\cdot c$." My problem with this answer: what's wrong with using the Doppler formula from SR as long as someone views the universe as static, in a steady state? With just the right amount of dark energy to balance the gravitational contraction, if you wish?
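The contrast the third answer draws is easy to explore numerically. A small sketch (function names are mine): the SR formula stays below c for any z, while the naive linear relation v = cz exceeds c for z > 1.

```python
C = 299_792.458  # speed of light, km/s

def v_doppler_sr(z):
    """Special-relativistic Doppler velocity
    v = c * ((1+z)^2 - 1) / ((1+z)^2 + 1); always below c."""
    s = (1.0 + z) ** 2
    return C * (s - 1.0) / (s + 1.0)

def v_linear(z):
    """Naive linear relation v = cz; exceeds c for z > 1."""
    return C * z

# At z = 1 the SR formula gives 0.6c, while v = cz already gives c.
print(v_doppler_sr(1.0) / C)  # ~0.6
```

Neither formula is the GR recession velocity of an expanding universe, which is the point of the surrounding discussion; this just makes the z > 1 behaviour of the two classical formulas concrete.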

My fourth answer: "Recent observations of distant SN Ia show a duration-redshift relation that can only be explained with time dilation [see Davis and Lineweaver, 2004, "Expanding Confusion etc."]." My problem with this answer: does time dilation prove we have expanding space, in disagreement with a Doppler effect?

My fifth answer would involve the magnitude-redshift relation for distant SN Ia [Davis and Lineweaver], but that's too complicated for my pupils.

ABSTRACT

Baryon Acoustic Oscillations (BAOs) provide a "standard ruler" of known physical length, making them one of the most promising probes of the nature of dark energy (DE). The detection of BAOs as an excess of power in the galaxy distribution at a certain scale requires measuring galaxy positions and redshifts. "Transversal" (or "angular") BAOs measure the angular size of this scale projected on the sky and provide information about the angular distance. "Line-of-sight" (or "radial") BAOs require very precise redshifts, but provide a direct measurement of the Hubble parameter at different redshifts, a more sensitive probe of DE. The main goal of this paper is to show that it is possible to obtain photometric redshifts with enough precision (σz) to measure BAOs along the line of sight. There is a fundamental limitation as to how much one can improve the BAO measurement by reducing σz. We show that σz ≈ 0.003(1 + z) is sufficient: a much better precision will produce an oversampling of the BAO peak without a significant improvement in its detection, while a much worse precision will result in the effective loss of the radial information. This precision in redshift can be achieved for bright, red galaxies, featuring a prominent 4000 Å break, by using a filter system comprising about 40 filters, each with a width close to 100 Å, covering the wavelength range from ∼4000 to ∼8000 Å, supplemented by two broad-band filters similar to the Sloan Digital Sky Survey u and z bands. We describe the practical implementation of this idea, a new galaxy survey project, PAU, to be carried out with a telescope/camera combination with an étendue of about 20 m² deg², equivalent to a 2 m telescope equipped with a 6 deg² field-of-view camera, and covering 8000 deg² of the sky in four years. We expect to measure positions and redshifts for over 14 million red, early-type galaxies with L > L* and iAB < 22.5 in the redshift interval 0.1 < z < 0.9, with a precision σz < 0.003(1 + z). This population has a number density n ∼ 10⁻³ h³ Mpc⁻³ within the 9 h⁻³ Gpc³ volume to be sampled by our survey, ensuring that the error in the determination of the BAO scale is not limited by shot noise. By itself, such a survey will deliver precisions of order 5% in the dark-energy equation-of-state parameter w, if assumed constant, and can determine its time derivative when combined with future cosmic microwave background measurements. In addition, PAU will yield high-quality redshifts and low-resolution spectroscopy for hundreds of millions of other galaxies, including a very significant high-redshift population. The data set produced by this survey will have a unique legacy value, allowing a wide range of astrophysical studies.
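To get a feel for why σz ∼ 0.003(1 + z) is the interesting threshold, one can translate a redshift error into a comoving line-of-sight distance error, σr ≈ c σz / H(z), and compare it with the ∼100 h⁻¹ Mpc BAO scale. A rough sketch (the cosmological parameters below are illustrative round numbers, not the paper's fiducial model):

```python
import math

C = 299_792.458          # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s/Mpc (illustrative)
OMEGA_M, OMEGA_L = 0.3, 0.7

def hubble(z):
    """H(z) for a flat LCDM model, in km/s/Mpc."""
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def sigma_r(z, sigma_z_factor=0.003):
    """Comoving line-of-sight distance error (Mpc) implied by a
    photo-z error sigma_z = sigma_z_factor * (1 + z)."""
    return C * sigma_z_factor * (1 + z) / hubble(z)

for z in (0.1, 0.5, 0.9):
    print(z, round(sigma_r(z), 1))
# The smearing stays around ~13-15 Mpc across the survey range,
# comfortably below the ~150 Mpc (~100/h Mpc) BAO scale, so the
# radial peak survives the photo-z blurring.
```

With a much worse σz the smearing approaches the BAO scale itself and the radial signal washes out, which is the trade-off the abstract describes.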


3 ANALYSIS

3.1 Defining the narrow C iv sample

For this study, we want to exclude the more obvious outflow lines, the BALs and mini-BALs (Section 1). We therefore removed from our sample all absorbers with measured FWHM > 600 km s −1 . We also removed all absorbers blended with absorbers having FWHM > 600 km s −1 , as their identification can often be ambiguous. This helps reduce uncertainties associated with deblending and counting but, in the end, has little impact on our results because the number of such systems is small (<1 per cent of the sample). Finally, several spectra in the sample contain extremely strong BAL absorption which removes flux from a large swathe of continuum. We computed both the BALnicity Index (BI; Weymann et al. 1991) and the Absorption Index (AI; Hall et al. 2002) to quantify these absorbers (see Rodriguez Hidalgo et al. for more discussion). After visual inspection, it was decided to remove spectra with AI > 3120 and/or BI > 1250, which totalled 62 quasars. We note that this is not the equivalent of removing BAL quasars; rather, we remove only those with little 'usable' unabsorbed continuum (although, for the BAL quasars kept in the sample, we do not include narrow C iv systems that are blended with BALs).

The final sample includes 1409 quasars, which provide a catalogue of 2566 narrow C iv absorbers with 0.1 < W0(λ1548) < 2.8 Å and −3000 < vabs < 70 000 km s −1 , 2009 of which have W0(λ1548) > 0.3 Å. We note that W0(λ1548) and FWHM are correlated, so the removal of FWHM > 600 km s −1 systems biases our catalogue against very strong systems. Fig. 3 shows a scatter plot of W0(λ1548) versus quasar-frame velocity for the final narrow C iv catalogue.

Scatter plot of W0(λ1548) versus quasar-frame velocity for our narrow (FWHM ≤ 600 km s −1 ) absorber sample. Velocities shown are for the Mg ii -corrected redshifts assuming no Mg ii velocity zero-point offset.


3.2 Computing the completeness correction

Our data set is complete in neither W0(λ1548) nor velocity space. It is therefore necessary to compute a two-dimensional completeness correction. We follow a procedure similar to that described in Nestor et al. (2005), whereby we determine the minimum W0(λ1548), Wmin, detectable at our imposed significance-level threshold at every pixel and half-pixel in each spectrum. We then construct an array on a grid of velocity and W0(λ1548) steps, with values corresponding to the number of spectra for which a C iv λ1548 line of the given W0(λ1548) could have been detected at the given velocity, accounting for the removed regions of spectra having absorbers with FWHM > 600 km s −1 . This 'completeness array' can then be used to correct any computed incidence of absorbers in both W0(λ1548) and velocity space. In Fig. 4, we show the number of usable spectra as a function of velocity for different minimum W0(λ1548) values. The 0.5- and 0.6-Å curves are nearly identical – above W0(λ1548) ≃ 0.5 Å our sample is complete in W0(λ1548) at all velocities. Although the minimum detectable W0(λ1548) depends, in principle, on the absorption FWHM, in practice it is computed assuming unresolved lines. Any detection bias arising from this would only be relevant for lines with W0(λ1548) ∼ Wmin that are also resolved in velocity; however, lines weaker than ≈0.5 Å (the strength above which our survey is complete) are generally unresolved in the SDSS spectra.
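In outline, the completeness array can be built as follows. This is a toy sketch with random stand-in data; in the real procedure the Wmin(v) curves come from per-pixel significance limits in each spectrum, with removed regions excluded.

```python
import numpy as np

rng = np.random.default_rng(42)

v_grid = np.linspace(-3000.0, 70000.0, 146)   # velocity bins, km/s
w_grid = np.linspace(0.1, 0.6, 6)             # W_0 thresholds, Angstrom

n_spectra = 100
# Each spectrum gets a minimum-detectable-width curve W_min(v); here
# random toy values stand in for the per-pixel significance limits
# (a removed region could be marked with np.inf so nothing is
# "detectable" there).
wmin = rng.uniform(0.05, 0.55, size=(n_spectra, v_grid.size))

# completeness[i, j] = number of spectra in which a line of equivalent
# width w_grid[j] could have been detected at velocity v_grid[i].
completeness = np.zeros((v_grid.size, w_grid.size), dtype=int)
for j, w in enumerate(w_grid):
    completeness[:, j] = (wmin <= w).sum(axis=0)

# The corrected incidence in a (v, W) cell is then the raw count of
# absorbers divided by the completeness for that cell.
```

By construction the array is monotonically non-decreasing in the equivalent-width direction, which mirrors the statement that the sample becomes complete above ≈0.5 Å.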

The number of usable spectra as a function of velocity corresponding to W0(λ1548) values of 0.1, 0.2, 0.3, 0.4, 0.5 and 0.6 Å, using Mg ii -corrected redshifts assuming no Mg ii velocity offset. The 0.5- and 0.6-Å curves are nearly identical – above W0(λ1548) ≃ 0.5 Å our sample is complete in W0(λ1548) at all velocities.


3.3 Results

3.3.1 The quasar-frame velocity distribution

The incidence of C iv absorbers versus quasar-frame velocity for systems with W0(λ1548) ≥ 0.3 Å, using Mg ii -corrected redshifts and assuming no Mg ii velocity offset. The horizontal bars indicate the velocity bins and vertical bars the 1σ uncertainty from counting statistics. The curves are maximum-likelihood fits to the data in the intervals v < 0 and 40 000 < v < 60 000 km s −1 , representing the 'environmental' and 'intervening' components, respectively (see the text).


Environmental and intervening fitted parameters.

〈ΔvMg II〉 (km s −1 )   Nenv   σenv (km s −1 )   σ′env (km s −1 )   Ninter
0                       31.5   681               442                6.4
+102                    26.8   665               430                6.4
−97                     35.4   694               452                6.4
SDSS                    25.9   1319              913                6.5

Note. The σenv and σ′env values are the Gaussian widths before and after deconvolution, respectively, and the N values are the normalizations of the fits – that is, the incidence of intervening systems and the incidence of environmental systems at v = 0.


Our assumption that none of the absorbers at v > 40 000 km s −1 or v < 0 arise in outflows means that our derived outflow fractions (see below) are, strictly speaking, lower limits at all velocities. For example, Richards et al. claim that as many as 36 per cent of C iv absorbers at 5000 < v < 55 000 km s −1 form in outflows, while Misawa et al. (2007) claim that the fraction of outflow lines at these velocities is ≃10–17 per cent. However, it is unclear what the contribution of very high-v systems (i.e. v > 40 000 km s −1 ) to the Richards result is, and the Misawa et al. sample contains no systems at v > 40 000 km s −1 that are classified as reliable narrow-line outflow candidates. Thus, while we cannot directly test our high-velocity assumption, it is likely that we are only slightly, if at all, overestimating the contribution of intervening systems.

The assumption that all v < 0 absorbers are environmental is potentially less reliable, however. It is an appropriate assumption if there are no 'infalling' systems intrinsic to the AGN flow and all of the individual quasar velocity zero-points are accurate (or if there are no outflowing systems with velocities that are small relative to the velocity zero-point dispersion). However, our zero-points are inaccurate with an (assumed) rms of ∼270 km s −1 , and the systematic velocity offset of Mg ii from the true velocity zero-point is not well constrained (see Section 2.2). Thus, it is possible that our environmental fits are also accounting for some low-velocity outflow systems. We investigate this issue below.

The uncertainty in the appropriate velocity zero-point has a direct effect on the measured velocity-space distribution of absorbers. In order to determine the magnitude of this potential bias and investigate its effect on the computed outflow fraction, we computed the incidence, as well as the maximum-likelihood fits, using all four of the quasar redshift determination methods discussed in Section 2.2. Fig. 6 shows the low-velocity region of the incidence computed using Mg ii -determined redshifts with (a) no Mg ii velocity zero-point offset (i.e. 〈ΔvMg II〉= 0), (b) 〈ΔvMg II〉=+102 km s −1 , (c) 〈ΔvMg II〉=−97 km s −1 , as well as (d) SDSS redshifts. The curves represent the data well in the fitted regions, for all of the offsets considered. However, the distribution determined assuming 〈ΔvMg II〉=−97 km s −1 overpredicts the incidence at v ≳ 0. We note that, if the velocity zero-point is (on average) properly determined, the incidence for v ≳ 0 should be larger than that for v ≲ 0, since the v ≳ 0 data contain, in principle, environmental, intervening and outflowing C iv systems, and we assume that the environmental component is reasonably symmetric about the true velocity zero-point. We obtain a value of σ′env= 913 km s −1 using the SDSS redshifts, which is unphysically large unless the quasar environments are predominantly in large clusters (e.g. Becker et al. 2007); however, Outram et al. (2003) show that the clustering amplitude of quasars at z ∼ 1.4 is similar to that of present-day galaxies. The Mg ii -determined redshifts lead to values of 430 ≲σ′env≲ 450 km s −1 for the dispersion, which is comparable to the usual division between large groups and poor clusters (e.g. Mulchaey 2000).

The same as Fig. 5, but employing Mg ii -determined redshifts with (a) no Mg ii velocity zero-point offset, (b) 〈ΔvMg II〉=+102 km s −1 , (c) 〈ΔvMg II〉=−97 km s −1 , as well as (d) the SDSS-provided redshifts. The width of the environmental component determined from SDSS redshifts is unphysically large, and that determined assuming 〈ΔvMg II〉=− 97 km s −1 gives a poor fit to the v∼ 0 data.


Considering the quality of the fits, we conclude that, while we do not explicitly reject any of the Mg ii -corrected redshift choices, an average Mg ii velocity zero-point offset of ≃0 to +100 km s −1 is most appropriate for our sample. Except when explicitly stated, we will use 〈ΔvMg II〉= 0 for the subsequent analyses. We also note that we computed the incidence for ranges of W0(λ1548), but found no significant trends.

3.3.2 The outflow fraction and incidence of outflowing absorbers

The top panel of Fig. 7 shows the data minus the sum of the two fits, divided by the data, which represents the minimum outflow fraction, foutflow (which is considered a minimum for the reasons described above). We find that foutflow increases strongly from v= 0 to v≃+2000 km s −1 , where it peaks with foutflow≃ 0.81 ± 0.13, and then decreases slowly out to v∼+12 000 km s −1 . Over the range where there is significant evidence for a non-zero outflow fraction, v≃+750 km s −1 to v≈+12 000 km s −1 , we find 〈foutflow〉= 0.43 ± 0.06. Over narrower ranges near the peak, we find 〈foutflow〉= 0.57 ± 0.10 for +1250 < v <+6750 km s −1 , and foutflow≃ 0.72 ± 0.07 for v≃+1250 km s −1 to v≈+3000 km s −1 . The outflow fraction decreases below v≈+2000 and disappears below v≈+750, apparently indicating an effective minimum projected ejection velocity for narrow C iv systems. It is also possible that systematics such as a strong proximity effect for the intervening absorbers or limitations in our ability to properly count systems contribute to this decrease. We discuss these and other possible biases in Section 3.4.
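The arithmetic behind the two panels can be sketched as follows (toy numbers chosen for illustration only, not the paper's actual fits or data): subtract the fitted environmental and intervening components from the observed incidence, and divide by the data to get the minimum outflow fraction.

```python
import numpy as np

# Toy binned incidence (arbitrary units) versus quasar-frame velocity,
# plus the two fitted non-outflow components: a Gaussian
# "environmental" term centred at v = 0 and a flat "intervening" term.
v = np.linspace(-5000.0, 15000.0, 21)            # bin centres, km/s
n_env = 30.0 * np.exp(-0.5 * (v / 450.0) ** 2)   # environmental fit
n_inter = np.full_like(v, 6.4)                   # intervening fit
n_outflow_true = np.where(v > 0, 25.0 * np.exp(-v / 4000.0), 0.0)
n_data = n_env + n_inter + n_outflow_true        # "observed" incidence

# Minimum outflow fraction: (data - env - intervening) / data,
# as in the top panel.
f_outflow = (n_data - n_env - n_inter) / n_data

# Residual incidence attributed to outflows alone, as in the
# bottom panel.
n_outflow = n_data - n_env - n_inter
```

Because the fits can also absorb genuine low-velocity outflow systems (Section 3.3.1), the fraction computed this way is a lower limit, which is the caveat stated above.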

Top panel: the excess of the data over the sum of the two fits, divided by the data, which represents the minimum outflow fraction. Bottom panel: the data minus our fits to the environmental and intervening systems, representing the incidence for the ejected (i.e. outflow) component only.


In the bottom panel of Fig. 7, we show the data after subtraction of the sum of the fits (using 〈ΔvMg II〉= 0), which represents the incidence for the outflow component only. Using 〈ΔvMg II〉=−97 km s −1 (i.e. Mg ii emission redshifted from the quasar rest-frame) or SDSS-determined redshifts leaves, respectively, marginally and strongly statistically significant negative, and thus unphysical, residual incidence for the outflow component at some velocities. Using 〈ΔvMg II〉= 0 (i.e. Mg ii emission unshifted from the quasar rest-frame), the incidence of systems is consistent with zero at low positive velocities: in this case, our assumption that no absorbers arising in outflows have measured velocities with v < 0 is likely valid. Attempts to fit the entire span of velocities (i.e. including 0 < v < 40 000 km s −1 , see Section 3.3.1) with, for example, a power law or exponential for v > 0 failed since, as is clear from the bottom panel of Fig. 7, these are not appropriate descriptions of the outflowing component. Considering the results employing Mg ii -corrected quasar redshifts, we find that the incidence of narrow C iv absorbers with W0(λ1548) > 0.3 Å presumably forming in accretion-disc outflows peaks at v ≃ 2000 km s −1 and exhibits a wing that extends out to at least v ≃ 9000 km s −1 , for all 〈ΔvMg II〉 values considered.

3.4 Possible systematics

In this section, we discuss potential biases introduced by the method we employ to ‘count’ systems as well as the assumptions behind the modelling of the distribution of cosmologically intervening absorbers.

As narrow C iv absorbers cluster at low (≲1000 km s −1 ) quasar-frame velocities (Nestor, Hamann & Rodriguez Hidalgo 2007), removing regions with broad absorption is more likely to also remove narrow absorbers, compared to randomly selected regions at similar velocity. Since broad absorbers are more common at low positive velocity, it is a concern that this bias may affect the measured distribution of narrow systems. Furthermore, this region of velocity space exhibits (in principle) all three of our categories of absorbers, thus increasing the chances for blending of outflow and non-outflow systems to occur. Therefore, the apparent dearth of narrow outflow-component systems at v ≲ 2000 km s −1 (Fig. 7) could conceivably be in part a byproduct of our removal of broad systems and absorbers blended with broad systems. To test this, we recomputed the incidence without rejecting any absorbers based on FWHM or blending with broad systems. However, the only qualitative difference in our results was an extension of the high-velocity tail of outflowing systems out to v ≈ 25 000 km s −1 , indicating that broad (FWHM > 600 km s −1 ) absorbers have a larger extent in ejection velocity than do narrow systems. This will be discussed in further detail in Rodriguez Hidalgo, Hamann & Nestor (in preparation).

Alternatively, overzealous de-blending could, conceivably, influence both the shape and the magnitude of the incidence. To investigate this concern, we combined all absorbers with velocity differences less than the sum of the two HWHM values plus 150 km s −1 (i.e. the SDSS resolution) into a single absorber to be counted once. We then recomputed the incidence and the fits with this reduced catalogue. This resulted in a slight decrease in the outflow fraction for bins where it was non-zero, and virtually no change in the shape or range over which foutflow > 0 is significant. Even when we took the extreme step of combining all absorbers within 500 km s −1 plus the sum of the two HWHM values, the shape of the incidence for outflow-only systems remained the same, although the magnitude was reduced by ≃20–25 per cent. Thus, we believe that any inconsistency in our counting method is at a level small enough to be safely ignored.
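The merging test described above can be sketched like this (the helper function and toy catalogue are illustrative, not the paper's code; the real catalogue stores a measured FWHM per system):

```python
# Combine any absorbers whose velocity separation is less than the sum
# of their half-widths (HWHM = FWHM/2) plus the instrumental
# resolution, and count the blend as a single system.
RESOLUTION = 150.0  # km/s, roughly the SDSS resolution

def merge_absorbers(absorbers, extra=RESOLUTION):
    """absorbers: iterable of (velocity, fwhm) pairs, any order.
    Returns a reduced list in which close pairs are merged into a
    single system (the lowest-velocity member of each blend is kept)."""
    merged = []
    for v, fwhm in sorted(absorbers):
        if merged:
            v0, fwhm0 = merged[-1]
            if abs(v - v0) < fwhm0 / 2 + fwhm / 2 + extra:
                continue  # blended with the previous system: count once
        merged.append((v, fwhm))
    return merged

systems = [(100.0, 200.0), (350.0, 200.0), (2000.0, 300.0)]
print(len(merge_absorbers(systems)))  # first two blend -> 2 systems
```

Re-running the incidence calculation on the merged catalogue and comparing with the original is then a direct test of how sensitive the results are to the de-blending choices.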

Another concern is that cosmologically intervening systems may not be uniformly distributed in quasar-frame velocity for our quasar sample. For example, it is known that at high z the incidence of intervening C iv absorbers decreases with increasing redshift (Sargent, Boksenberg & Steidel 1988; Misawa et al. 2002). Larger redshift correlates with smaller velocity in Figs 5–7. According to Monier et al. (2006), the incidence of W0(λ1548) ≥ 0.3 Å systems shows little evolution over the range 1.46 < z < 2.25. Misawa et al., however, claim a ≃50 per cent decrease over the same range. While the decrease in foutflow is too abrupt to be entirely caused by redshift effects, we nonetheless investigated the maximum possible magnitude of this effect by running 50 Monte Carlo simulations of intervening systems in our data. We randomly distributed absorbers in the spectra of our sample in redshift space using the parametrization from Misawa et al., and converted the resulting distributions into velocity space to determine the modelled incidence. This resulted in a ≲10 per cent decrease in the incidence from v ≈ 45 000 km s −1 to v = 0, or a difference of ≲0.7 in the incidence.

Potentially more important is the possibility of a strong proximity effect, similar to that seen for Lyman α forest lines, causing a decrease in the incidence of intervening systems. While the magnitude of any proximity effect likely depends on the line-strengths being considered, C iv has a much higher ionization energy than does H i (64.5 eV compared to 13.6 eV), and thus any C iv proximity effect should be significantly weaker than that for H i . We therefore ran 50 additional Monte Carlo simulations of intervening systems using the proximity effect results for the Lyman α forest from Scott et al. (2000), considering this a strong upper limit to any C iv proximity effect. Doing so, we find a linear decrease in the incidence from no difference at v = 2000 km s −1 to a 50 per cent deficit of systems at v = 0. This corresponds to a difference in the incidence at v = 0 of ≃3.5. Thus, a C iv proximity effect as strong as that claimed for Lyman α would only marginally change the results described above, and then only at v ∼ 0.

Finally, we also considered the effect of an underestimation of the velocity zero-point dispersion. A larger dispersion would cause more low-velocity intervening systems to scatter to v < 0. We repeated the modelling of the environmental and intervening components doubling the velocity zero-point dispersion and found no qualitative changes to our results.

We are limited in our ability to determine true individual velocity zero-points to deblend (‘count’) absorbers into physically distinct systems with complete accuracy, and to know the true distributions of the non-outflowing absorber populations. These systematics almost certainly affect our results to some degree. None the less, the maximum estimates of the effects from all of the systematics that we considered are relatively small and have no significant impact on our qualitative results.

The simple formula is just the first-order expansion of the more complicated one about $v = 0$, the latter being exact for the Doppler effect of motion purely along the line of sight. The $v$ here refers to the peculiar motion of the galaxy.

Be aware that for all but the very nearest galaxies, the observed redshift comes almost entirely from the expansion of the universe, not from relative motion in the special-relativistic sense. Thus converting from redshift to velocity using either of the formulas mentioned, though a very common practice, can be misleading. For a thorough albeit technical discussion of subtleties related to this point, there is a paper by Davis and Lineweaver.

Edit: Since I have lately been using NED a lot, I came across this page in their documentation. Point 1 in particular notes that "no relativistic correction is applied" and so you may see "velocities in excess of the speed of light." (It also says $v = z/c$, but I hope that's just a typo.) There are two important points here. The first is that you can safely assume the values reported are redshift times the speed of light, possibly with a correction to a certain reference frame. The second is that even NASA is under the misconception that redshift of distant galaxies has something to do with Doppler shift, when this is just fundamentally false. The quantity $zc$ is really just a way of putting units to redshift, nothing more.

5 IMPACT ON MEASUREMENTS OF H0

Neglecting the cosmology dependence and using v(z) = cz would overestimate the Hubble parameter by ΔH0 ∼ 1 km s −1 Mpc −1 for a sample evenly distributed in redshift out to z ∼ 1, so it is very important to include the cosmology dependence of v(z) when measuring H0. The fear that doing so renders the measurement of H0 cosmology dependent is unfounded. All remotely viable Friedmann–Robertson–Walker cosmological models are closer to the fiducial model mentioned above than they are to v = cz. So, no matter what the cosmological model, the full expression for velocity using any fiducial model will always be better than the linear approximation in z, and the cosmology dependence of the resulting H0 measurement is weak.


Sighting-In a Rifle to Shoot Different Loads

Suppose one has a hunting rifle with which he/she wishes to shoot loads with different bullet weights and velocities. What are the principles and guidelines that may be used to sight in the rifle to seamlessly change from using one load to another? The answer, I believe, is both simpler and more complicated than one might imagine.

After thinking about this issue for some time and crunching some ballistic data, I came up with an example scenario that illustrates both the simplicity and complications of changing from one load to another.

Suppose that I have a .308 Winchester rifle, with a 22-inch barrel, and I want to be able to confidently shoot hunting loads with 150, 165, and 180 grain bullets. These loads are not going to shoot to the same points of impact at extended ranges, due to variations in the bullet weights and ballistic coefficients, along with the differences in muzzle velocities (MV) at which the bullets are driven. Here are some example data that illustrate the issues involved in shooting different loads from the same rifle.

MPBR and ballistic data for example loads

Consider the following .308 Winchester handloads:

• 150 grain Hornady SP bullet at 2700 fps MV, BC .338
• 165 grain Hornady SP bullet at 2600 fps MV, BC .387
• 180 grain Hornady SP bullet at 2500 fps MV, BC .425

These example loads are based on data in the Hornady Handbook of Cartridge Reloading (10th edition, 2016). MVs are for test loads fired in 22-inch barrels and are attained by several different powders and charge weights listed in the tables. The bullet is the Hornady flat-base InterLock Spire Point, a well proven big game bullet design. Other load specifications are listed on the first page of the .308 Winchester data section of the Hornady Handbook.

At Guns and Shooting Online, we believe the best way to sight-in hunting rifles is for their Maximum Point Blank Range (MPBR). Given the load specifications above, it is easy to use online ballistics programs to calculate the MPBR of each load, plus other relevant data.

Using the shooterscalculator.com Point Blank Range calculator yields the following results, with the program set for a +/- 3 inch MPBR, i.e., a 6 inch target diameter. (In addition to the MPBR, I show the Far Zero [FZ], Near Zero [NZ] and 100 yard sight-in elevation [100 S/I] for each load.)

• 150 grain load: MPBR 258 yds., FZ 221 yds., NZ 23 yds., 100 S/I 2.80 in.
• 165 grain load: MPBR 253 yds., FZ 216 yds., NZ 23 yds., 100 S/I 2.82 in.
• 180 grain load: MPBR 246 yds., FZ 210 yds., NZ 22 yds., 100 S/I 2.87 in.

From this, it might appear that if the rifle were sighted in for (say) the 150 grain load, then the 165 grain load should shoot very close to its indicated MPBR with the same sight setting. Meanwhile, the 180 grain load might fall slightly short of the indicated MPBR, because the optimal 100 yard sight-in elevation is 0.07 inch higher than that of the 150 grain bullet load. Further analysis will show whether this superficial conclusion is correct.

Trajectory analysis of the loads uses the Shooters Calculator Ballistic Trajectory calculator. The key variables, conventional (G1) bullet BC, bullet weight (grains) and MV (f.p.s.) are entered, along with the desired zero range (far zero). The program's default setting is for a sight height of 1.5 inches (a low mounted hunting scope with an objective of 40mm or less), which I did not change.

I set the "chart range" for 260 yards, which is just longer than my longest MPBR. The "chart step size" can be set to show external ballistics in 1, 5, 10, 20, 25, 50, or 100 yard increments. I set this to 1 yard for this analysis, although I usually do ballistics analyses for one of the larger increments, which makes for a smaller output table.

There are also changeable parameters for shooting angle, wind speed, wind angle and ambient shooting conditions (altitude, temperature, barometric pressure, humidity). I left these at their default values.

Here are the key results if the rifle is sighted in for a +/- 3 inch MPBR with the 150 grain load. The data is in the format: range in yards / bullet trajectory in inches.

150 grain load: 24 yds. / +0.01 in.; 100 yds. / +2.80 in.; 122 yds. / +3.0 in. (apogee); 200 yds. / +1.22 in.; 221 yds. / +0.02 in. (far zero); 259 yds. / -3.01 in. (MPBR)

Note that the trajectory calculator yields near zero and MPBR values one yard longer than indicated by the point blank range calculator. Such small differences pop up from time to time between the two calculators, but are of no consequence.

With this data in hand, it would be a routine task to sight in a rifle for this load. I suggest a step-by-step procedure for Sighting In a Rifle for Maximum Point Blank Range in a companion article.

What will trajectories be if 165 or 180 grain loads are shot from the rifle with the sight set for MPBR with the 150 grain load? To evaluate this, I make the assumption that all three loads will have the same height of trajectory (elevation) at near zero distance (25 yards or less), but then the heavier, slower loads will lose height, relative to the 150 grain load, as the range lengthens.

I am comfortable with this assumption, because I have done the math for a variety of high-intensity cartridge / load sets. Generally, near zeros for +/- 3 inch MPBR analyses are at, or very close to, 25 yards downrange, while bullet elevation differences for different bullet weight loads in the same cartridge vary no more than a few hundredths of an inch at near zero range.

Applying the ballistic trajectory calculator under this assumption involves shortening the far zero values entered in the program, for both the 165 and 180 grain loads, to the point that 24 yard trajectory height will be 0.01 inch. Here are the results.

165 grain load: 24 yds. / +0.01 in.; 100 yds. / +2.65 in.; 120 yds. / +2.77 in. (apogee); 200 yds. / +0.61 in.; 210 yds. / +0.02 in. (far zero); 249 yds. / -3.02 in. (MPBR)

180 grain load: 24 yds. / +0.01 in.; 100 yds. / +2.49 in.; 113 yds. / +2.55 in. (apogee); 198 yds. / +0.01 in. (far zero); 237 yds. / -2.99 in. (MPBR)
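The "shorten the far zero until the 24 yard height is +0.01 inch" step is a root-finding problem, and bisection handles it cleanly. The sketch below uses a drag-free (vacuum) trajectory purely so it is self-contained; because it ignores drag, the far zero it returns is longer than the 210-221 yard values in the text, so treat it as a template for the procedure, not for the numbers.

```python
G = 32.174  # gravitational acceleration, ft/s^2

def drop_in(v_fps, dist_yds):
    """Drag-free gravity drop in inches (a deliberate oversimplification;
    a real program would integrate over G1 drag tables)."""
    t = dist_yds * 3.0 / v_fps  # time of flight at constant speed
    return 0.5 * G * t * t * 12.0

def path_in(v_fps, zero_yds, dist_yds, sight_in=1.5):
    """Bullet height (inches) relative to the line of sight."""
    d0 = drop_in(v_fps, zero_yds)
    return -sight_in + (sight_in + d0) * dist_yds / zero_yds - drop_in(v_fps, dist_yds)

def far_zero_for_near_height(v_fps, near_yds, near_height_in,
                             lo=50.0, hi=400.0):
    """Bisect for the far zero that puts the bullet near_height_in
    inches high at near_yds (the matching condition used in the text)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if path_in(v_fps, mid, near_yds) < near_height_in:
            lo = mid  # zero too short: bullet still too low at near range
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical 2800 f.p.s. load, matched to +0.01 in. at 24 yds.
print(far_zero_for_near_height(2800, 24, 0.01))
```

Plugging the returned far zero back into the trajectory program then yields the tables for the 165 and 180 grain loads.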

The major result of this analysis is that the 165 and 180 grain loads will not shoot to an optimal +/- 3 inch MPBR with the sight zeroed for the 150 grain load. MPBR of the 165 and 180 grain loads will be 4 and 9 yards shorter than optimal, respectively. Meanwhile, the apogee of the 165 grain load will be about 1/4 inch below 3 inches, while that of the 180 grain load will be nearly 1/2 inch below 3 inches.

One is left, then, with some decisions to make regarding how to cope with the fact that all three loads will not shoot to optimal MPBR when the rifle is sighted-in for one of them. The simplest response is to sight-in for one of the loads and let the others fall where they may.

For instance, if I were mainly using the rifle to hunt deer, I would sight-in the rifle for the 150 grain load. I have used a .308 Winchester to hunt deer for many years, using 150 grain loads exclusively, with great success. The general principle would be to sight-in the rifle for the load that would be used most, then be aware of any meaningful difference in MPBR and apogee if another load were used.

Alternatively, one could split the difference among multiple loads. In this example, sight in the rifle for optimum MPBR with the 165 grain load, calculate (and range test) the trajectories of the 150 and 180 grain loads with that sight setting, and keep records on the differences in MPBR and trajectory, so you know what to expect when you switch from one load to another.

Another way of dealing with the different trajectories of the loads would be to change the elevation setting of the scope sight when one changes the load being used. What follows assumes, for sake of illustration, that the rifle sighted-in with the 150 grain load will shoot to exactly 2.80 inches elevation at 100 yards. (This assumes a scope with 100% accurate and repeatable adjustments, which hunting scopes very seldom have. -Editor)

Conformity of actual sight-in elevation to calculated elevation would be happenstance. The actual sight-in elevation would more likely be anywhere up to +/- 1/8 inch off of the calculated sight-in elevation, with a scope that adjusts in perfect 1/4 m.o.a. increments.

The trajectory calculator indicates that a 165 grain bullet will hit 2.65 inches high at 100 yards, when the rifle is sighted in for optimum MPBR with 150 grain loads. If the rifle is sighted in for optimum MPBR with 165 grain loads, the 100 yard sight-in elevation would be 2.82 inches. If one raises the elevation one click (0.25 inch at 100 yards), the adjustment would actually overshoot the optimal elevation change (0.17 inch). The adjusted elevation setting would bring the 165 grain load closer to optimum, but it will not be right on.

Similarly, the 180 grain load is calculated to shoot to 2.49 inches of elevation at 100 yards, with the rifle sighted in for the 150 grain load. Meanwhile, the optimal 100 yard sight-in elevation for the 180 grain load is 2.87 inches, a difference of 0.38 inch between the 150 and 180 grain sight-in elevations. Should one adjust the scope one click (0.25 inch), or two (0.5 inch) in this case?
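The click arithmetic in the last two paragraphs is simple enough to put in a one-line helper. This sketch follows the article's convention of treating one 1/4 m.o.a. click as exactly 0.25 inch at 100 yards (true m.o.a. is about 1.047 inches at 100 yards), and simply rounds the required correction to the nearest whole click:

```python
def clicks_for_correction(delta_in, range_yds=100, click_moa=0.25):
    """Nearest whole number of scope clicks for a desired point-of-impact
    shift of delta_in inches at range_yds.

    Treats 1 m.o.a. as 1 inch at 100 yards, as the article does."""
    inches_per_click = click_moa * range_yds / 100.0
    return round(delta_in / inches_per_click)

# 165 grain load: optimal 2.82 in. vs. actual 2.65 in. at 100 yards
print(clicks_for_correction(2.82 - 2.65))  # -> 1
# 180 grain load: optimal 2.87 in. vs. actual 2.49 in. at 100 yards
print(clicks_for_correction(2.87 - 2.49))  # -> 2
```

For the 180 grain load the 0.38 inch difference rounds to two clicks (0.50 inch), which overshoots by 0.12 inch; one click (0.25 inch) undershoots by 0.13 inch. Either choice leaves the load slightly off optimum, which is the point the text is making.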

Issues with changing scope settings

I am not a fan of casually twirling the elevation or windage settings on a rifle scope, for several reasons. Basically, there is too much potential for making a mistake when one messes with a scope that has been sighted-in for a particular load. For instance, I have worked with scopes where the adjustment clicks were not crisp, so it was difficult to be sure how many increments of adjustment were being made.

As another example, suppose that I adjusted the elevation of a scope (say) three clicks when I switched from using one load to another that shoots on a significantly different trajectory, but then forget that I had made that adjustment when I subsequently switched back to the original load. I might not realize this oversight until I missed a shot at a game animal (or, worse, made a crippling hit), because the rifle was not shooting where I thought it should. I could go on, but I believe you get the idea.

I realize that so-called "tactical" rifle scopes, with easily accessible and readily adjustable turrets, have become somewhat popular on the shooting scene. To satisfy my own curiosity, I did a quick survey of these products, using the MidwayUSA website. MidwayUSA catalogs nearly 100 "tactical rifle scopes" (the search phrase I used), but most of these are not practical hunting scopes, because their magnification range is not right for hunting, their objective lenses are too large, they have cluttered reticles and the better brands and models are very expensive.

Only about one-third of the scopes listed had magnification ranges of 3x-9x, or lower. My view is that any big game hunting rifle that mounts a scope with a low end magnification greater than 3x, or a high end magnification greater than 9x, is wearing too much scope.

I say this for two reasons. First, most popular big game (Class 2 or 3) hunting cartridges have +/- 3 inch MPBR ranges that fall roughly between 250 and 300 yards, depending on the cartridge and load in question. A high end magnification of 9x is more than enough to get a clear sight picture on a deer or larger animal at those distances.

Second, it is likely that most commonly hunted game animals are taken at ranges of 100 yards or less. For these shorter range shots, a 2x to 4x magnification is more than adequate. My personal favorite deer rifle scopes are 1.5-5x or 2-7x, with objective lenses of no more than 33 mm diameter. (See Riflescopes for Hunting Class 2 Game for further thoughts on selecting a hunting scope.)

Most of the tactical rifle scopes I browsed have cluttered reticles, with things such as MOA dots or stadia marks on the cross hairs. A simple medium cross hair or duplex reticle, or one with an added center circle, works much better for game hunting scopes.

Conversely, a scope with MOA hashmarks, Mil-dots, or stadia lines on the crosshairs makes a certain sense on a varmint rifle that may be used to take extreme range (beyond MPBR) shots. The varmint hunter can use such reference markings to aid in estimating holdover for extreme range shots and in making hold-off adjustments in significant cross winds.

Scopes that have both some system of dots, lines, or tiny circles visible in the FOV and readily accessible and adjustable tactical turrets are, in my view, a redundancy. If you have one, why do you need the other?

If I were choosing a scope to mount on a varmint rifle that I expected to shoot a lot at extreme ranges, I would favor one that has MOA marks on the crosshairs, but not tactical turrets. The dots would help me make holdover and hold-off sighting adjustments and I would not be tempted to start spinning those turret dials.

Think about it: if a shooter spent a day shooting over a prairie dog town, frequently adjusting turrets for different shot opportunities, he would have no idea where the scope was pointing at the end of the day. To me, scopes with tactical turrets make no sense for either game or varmint hunting.

Another reason I am not enamored of tactical scopes is price. Fully one-half of the tactical scopes listed by MidwayUSA were priced from $500 up to $2,400. Considering that comparable scopes, without the so-called tactical turrets and busy reticles, can be bought for roughly one-half to two-thirds of those prices, I cannot see the benefit of paying a high price for a scope on which I might change the elevation setting a couple of times a season (if at all) and which has a poor reticle for hunting, to boot.

I reported the trajectory data above to two decimal places, partly because that is the level of detail returned by the program I used, but also because I wanted to show the differences in trajectory between loads as precisely as possible. However, measurement of groups shot on target will normally not be that precise. I measure groups to 1/10th inch accuracy. Any elevation or windage data that are more detailed than that come about by averaging multiple shot groups.

A related topic, already mentioned in passing, is that it is a fortuitous accident when the 100 yard sight-in elevation of a particular load exactly matches the elevation calculated via a ballistic trajectory program. More likely is that the best sight-in elevation attainable may be as much as 1/8 inch higher or lower than the calculated elevation. The same applies to getting the windage of a load exactly on line. This is because most hunting scopes are designed to adjust elevation in 1/4 m.o.a. increments.

There are several other rifle, ammunition and shooter related variables that can frustrate any attempt to get a particular load to consistently shoot to a given point of impact at a particular range. Unfortunately, discussion of these is beyond the scope of this article.

Perhaps the most important additional thought I can share is that computer generated ballistic data should always be verified by actual shooting. It might be tempting to do a 100 yard sight-in with a particular load and then simply trust the ballistic program's predictions of where that load, and others that might be used interchangeably, will hit at longer ranges.

However, I am never fully confident of where my bullets will hit at extended ranges until I have shot some test groups. Generally the results are close to what the trajectory table indicates they should be, but occasionally test groups shot at 200 yards with the sight-in load will be off enough to prompt a one click adjustment (1/2 inch at 200 yards) in the elevation or windage of the scope.

The bottom line is that the only way to be sure how a particular rifle, load and shooter will interact to place bullets at any given distance is by shooting. There is no substitute for trigger time to gain confidence in how your rifle shoots particular loads at various ranges. This is especially important for the hunter, because shooting positions, accuracy and consistency in the field are very different from test firing from the shooting bench. The article The Personal Range Limit is especially instructive on this point.

It is probably clear where I stand on the issue of sighting-in a rifle when shooting significantly different loads. I favor MPBR sight-in for the load that I am likely to shoot the most, then calculating trajectories of and test firing other loads that I might occasionally use. Once I have verified how those other loads fly with the primary load sight-in, I write a note card summarizing the key ballistic information for each load and tuck it in the rifle case. This way I can quickly refresh my memory on how any load I use will perform. (Some hunters tape a note card with such information to their rifle stock, although I have never gone that far.)

I will not criticize those who choose to adjust their scope setting for different loads, if this is their preference. I do not go there, because I would rather keep things as simple as possible, even though it means I am not shooting loads, other than my primary one, to their optimum MPBR.

Are our textbooks wrong? Astronomers clash over Hubble's legacy

Images of Galactic nebulae and a supernova remnant that were obtained via the Hubble Space Telescope, which is named after astronomer Edwin Hubble in recognition of his seminal contributions to astronomy. Credit: spikedrocker/deviantart

Edwin Hubble's contributions to astronomy earned him the honor of having his name bestowed upon arguably the most famous space telescope (the Hubble Space Telescope, HST). Contributions that are often attributed to him include the discovery of the extragalactic scale (there exist countless other galaxies beyond the Milky Way), the expanding Universe (the Hubble constant), and a galaxy classification system (the Hubble Tuning Fork). However, certain astronomers are questioning Hubble's pre-eminence in those topics, and if all the credit is warranted.

"[The above mentioned] discoveries … are well-known … and most astronomers would associate them solely with Edwin Hubble yet this is a gross oversimplification. Astronomers and historians are beginning to revise that standard story and bring a more nuanced version to the public's attention," said NASA scientist Michael J. Way, who just published a new study entitled "Dismantling Hubble's Legacy?"

Has history clouded our view of Hubble the man? Or are his contributions seminal to where we are today in astronomy?

Assigning credit for a discovery is not always straightforward, and Way 2013 notes, "How credit is awarded for a discovery is often a complex issue and should not be oversimplified – yet this happens time and again. Another well-known example in this field is the discovery of the Cosmic Microwave Background." Indeed, controversy surrounds the discovery of the Universe's accelerated expansion, which occurred as recently as the late 1990s. Conversely, the discoveries attributed to Hubble transpired during the 1920s, nearly a century ago.

Before commencing this discussion, it should be emphasized that Hubble cannot defend his contributions, since he died long ago (1889-1953). Moreover, we can certainly highlight the efforts of other individuals whose seminal contributions were overlooked without diminishing Hubble's pertinence. The first topic discussed here is the discovery of the extragalactic scale. Prior to the 1920s it was unclear whether the Milky Way galaxy and the Universe were synonymous. In other words, was the Milky Way merely one among countless other galaxies?

Astronomers H. Shapley and H. Curtis argued the topic in the famed Island Universe debate (1920). Curtis believed in the extragalactic Universe, whereas Shapley took the opposing view (see also Trimble 1995 for a review). In the present author's opinion, Hubble's contributions helped end that debate a few years later and changed the course of astronomy, namely since he provided evidence of an extragalactic Universe using a distance indicator that was acknowledged as being reliable. Hubble used stars called Cepheid variables to help ascertain that M31 and NGC 6822 were more distant than the estimated size of the Milky Way, which in concert with their deduced size, implied they were galaxies. Incidentally, Hubble's distances, and those of others, were not as reliable as believed (e.g., Fernie 1969, Peacock 2013). Peacock 2013 provides an interesting comparison between distance estimates cited by Hubble and Lundmark with present values, which reveals that both authors published distances that were flawed in some manner. Having said that, present-day estimates are themselves debated.

Hubble's evidence helped convince even certain staunch opponents of the extragalactic interpretation such as Shapley, who upon receiving news from Hubble concerning his new findings remarked (1924), "Here is the letter that has destroyed my universe." Way 2013 likewise notes that, "The issue [concerning the extragalactic scale] was effectively settled by two papers from Hubble in 1925 in which he derived distances from Cepheid variables found in M31 and M33 (Hubble 1925a) of 930,000 light years and in NGC 6822 (Hubble 1925c) of 700,000 light years."

However, as table 1 from Way 2013 indicates (shown below), there were numerous astronomers who published distances that implied there were galaxies beyond the Milky Way. Astronomer Ian Steer, who helps maintain the NASA/IPAC Extragalactic Database of Redshift-Independent Distances (NED-D), has also compiled a list of 290 distances to galaxies published before 1930. Way 2013 added that, "Many important contributions to this story have been forgotten and most textbooks in astronomy today, if they discuss the "Island Universe" confirmation at all, bestow 100% of the credit on Hubble with scant attention to the earlier observations that clearly supported his measurements."

Way 2013 notes, “Table 1 lists all of the main distance estimates to spiral nebulae (known to this author) from the late 1800s until 1930 when standard candles began to be found in spiral nebulae [galaxies].” Credit: Way 2013/arXiv

Thus Hubble did not discover the extragalactic scale, but his work helped convince a broad array of astronomers of the Universe's enormity. However, by comparison to present-day estimates, Hubble's distances are too short owing partly to the existing Cepheid calibration he utilized (Fernie 1969, Peacock 2013 also notes that Hubble's distances were flawed for other reasons). That offset permeated into certain determinations of the expansion rate of the Universe (the Hubble constant), making the estimate nearly an order of magnitude too large, and the implied age for the Universe too small.

Hubble's accreditation as the discoverer of the expanding Universe (the Hubble constant) has generated considerable discussion, which is ultimately tied to the discovery of a relationship between a galaxy's velocity and its distance. An accusation even surfaced that Hubble may have censored the publication of another scientist to retain his pre-eminence. That accusation has since been refuted, but provides the reader an indication of the tone of the debate (see Livio 2012 (Nature), and references therein).

Top: spectra of galaxies that are redshifted. Credit: JPL/Caltech/Planck

Hubble published his findings on the velocity-distance relation in 1929, under the unambiguous title, "A Relation Between Distance and Radial Velocity Among Extra-Galactic Nebulae". Hubble 1929 states at the outset that other investigations have sought, "a correlation between apparent radial velocities and distances, but so far the results have not been convincing." The key word being convincing, clearly a subjective term, but which Hubble believes is the principal impetus behind his new effort. In Lundmark 1924, where a velocity versus distance diagram is plotted for galaxies (see below), that author remarks that, "Plotting the radial velocities against these relative distances, we find that there may be a relation between the two quantities, although not a very definite one." However, Hubble 1929 also makes reference to a study by Lundmark 1925, where Lundmark underscores that, "A rather definite correlation is shown between apparent dimensions and radial velocity, in the sense that the smaller and presumably more distant spirals have the higher space velocity."

Hubble 1929 provides a velocity-distance diagram (featured below) and also notes that, "the data indicate a linear correlation between distances and velocities". However, Hubble 1929 explicitly cautioned that, "New data to be expected in the near future may modify the significance of the present investigation, or, if confirmatory, will lead to a solution having many times the weight. For this reason it is thought premature to discuss in detail the obvious consequences of the present results … the linear relation found in the present discussion is a first approximation representing a restricted range in distance." Hubble implied that additional effort was required to acquire observational data and place the relation on firm (convincing) footing, which would appear in Hubble and Humason 1931. Perhaps that may partly explain, in concert with the natural tendency of most humans to desire recognition and fame, why Hubble subsequently tried to retain credit for the establishment of the velocity-distance relation.
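The "linear correlation between distances and velocities" that Hubble fit is, in modern terms, a one-parameter regression through the origin: v = H0 * d, with the slope H0 in km/s per Mpc. A minimal sketch of that fit is below; the data points are made-up illustrative values lying exactly on v = 70 d, not Hubble's 1929 measurements (which scattered widely about his line and gave a slope near 500 km/s/Mpc).

```python
def hubble_slope(distances_mpc, velocities_kms):
    """Least-squares slope through the origin for v = H0 * d,
    returned in km/s per Mpc."""
    num = sum(v * d for v, d in zip(velocities_kms, distances_mpc))
    den = sum(d * d for d in distances_mpc)
    return num / den

# Illustrative (made-up) points lying exactly on v = 70 * d:
d_mpc = [0.5, 1.0, 2.0, 5.0, 10.0]
v_kms = [35.0, 70.0, 140.0, 350.0, 700.0]
print(hubble_slope(d_mpc, v_kms))  # -> 70.0
```

Run on noisy real data, a fit like this returns whatever slope the calibration of the distance indicators implies, which is why Hubble's too-short Cepheid distances translated directly into a too-large Hubble constant.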

Hubble 1929 conveyed that he was aware of prior (but unconvincing to him) investigations on the topic of the velocity-distance relation. That is further confirmed by van den Bergh 2011, who cites the following pertinent quote recounted by Hubble's assistant (Humason) for an oral history project, "The velocity-distance relationship started after one of the IAU meetings, I think it was in Holland [1928]. And Dr. Hubble came home rather excited about the fact that two or three scientists over there, astronomers, had suggested that the fainter the nebulae were, the more distant they were and the larger the red shifts would be. And he talked to me and asked if I would try and check that out."

Hubble 1929 elaborated that, "The outstanding feature, however, is the possibility that the velocity-distance relation may represent the de Sitter effect, and hence that numerical data may be introduced into discussions of the general curvature of space." de Sitter had proposed a model for the Universe whereby light is redshifted as it travels further from the emitting source. Hubble suspected that perhaps his findings may represent the de Sitter effect, however, Way 2013 notes that, "Thus far historians have unearthed no evidence that Hubble was searching for the clues to an expanding universe when he published his 1929 paper (Hubble 1929b)." Indeed, nearly two decades after the 1929 publication, Hubble 1947 remarks that better data may indicate that, "redshifts may not be due to an expanding universe, and much of the current speculation on the structure of the universe may require re-examination." It is thus somewhat of a paradox that, in tandem with the other reasons outlined, Hubble is credited with discovering that the Universe is expanding.

The term redshift stems from the fact that when astronomers (e.g., V. Slipher) examined the spectra of certain galaxies, they noticed that although a particular spectral line should have appeared in the blue region of the spectrum (as measured in a laboratory), the line was actually shifted redward. Hubble 1947 explained that, "light-waves from distant nebulae [galaxies] seem to grow longer in proportion to the distance they have travelled. It is as though the stations on your radio dial were all shifted toward the longer wavelengths in proportion to the distances of the stations. In the nebular [galaxy] spectra the stations (or lines) are shifted toward the red, and these redshifts vary directly with distance – an approximately linear relation. This interpretation lends itself directly to theories of an expanding universe. The interpretation is not universally accepted, but even the most cautious of us admit that redshifts are evidence either of an expanding universe or of some hitherto unknown principle of nature."

As noted above, Hubble was not the first to deduce a velocity-distance relation for galaxies, and Way 2013 notes that, "Lundmark (1924b): first distance vs. velocity plot for spiral nebulae [galaxies] …Georges Lemaitre (1927): derived a non–static solution to Einstein's equations and coupled it to observations to reveal a linear distance vs. redshift relation with a slope of 670 or 575 km/s/Mpc (depending on how the data is grouped) …" Although Hubble was aware of Lundmark's research, he and numerous other astronomers were likely unaware of the now famous 1927 Lemaitre study, which was published in an obscure journal (see Livio 2012 (Nature), and discussion therein). Steer 2013 notes that, "Lundmark's [1924] distance estimates were consistent with a Hubble constant of 75 km/s/Mpc [which is close to recent estimates]." (see also the interpretation of Peacock 2013). Certain distances established by Lundmark appear close to present determinations (e.g., M31, see the table above).

So why was Hubble credited with discovering the expanding Universe? Way 2013 suggests that, "Hubble's success in gaining credit for his … linear distance-velocity relation may be related to his verification of the Island Universe hypothesis –after the latter, his prominence as a major player in astronomy was affirmed. As pointed out by Merton (1968) credit for simultaneous (or nearly so) discoveries is usually given to eminent scientists over lesser-known ones." Steer told Universe Today that, "Lundmark in his own words did not find a definite relation between redshift and distance, and there is no linear relation overplotted in his redshift-distance graph. Where Lundmark used a single unproven distance indicator (galaxy diameters), cross-checked by a single unproven distance to the Andromeda galaxy, Hubble used multiple indicators including one still in use (brightest stars), cross-checked with distances to multiple galaxies based on Cepheid variable stars."

Concerning assigning credit for the discovery of the expansion of the Universe, Way 2013 concludes that, "Overall we find that Lemaitre was the first to seek and find a linear relation between distance and velocity in the context of an expanding universe, but that a number of other actors (e.g. Carl Wirtz, Ludwik Silberstein, Knut Lundmark, Edwin Hubble, Willem de Sitter) were looking for a relation that fit into the context of de Sitter's [Universe] Model B world with its spurious radial velocities [the redshift]." A partial list of the various contributors highlighted by van den Bergh 2011 is provided below.

“The history of the discovery of the expansion of the Universe may be summarized [above],” van den Bergh 2011. Credit: van den Bergh/JRASC/arXiv

Way and Nussbaumer 2011 assert that, "It is still widely held that in 1929 Edwin Hubble discovered the expanding Universe … that is incorrect. There is little excuse for this, since there exists sufficient well-supported evidence about the circumstances of the discovery."

In sum, the author's personal opinion is that Hubble's contributions to astronomy were seminal. Hubble helped convince astronomers of the extragalactic distance scale and that a relationship existed between the distance to a galaxy and its velocity, thus propelling the field and science forward. His extragalactic distances, albeit flawed, were also used to draw important conclusions (e.g., by Lemaitre 1927). However, it is likewise clear that other individuals are meritorious and deserve significant praise. The contributions of those scientists should be highlighted in parallel to Hubble's research, and astronomy textbooks should be revised to emphasize those achievements. A fuller account should be given of the admirable achievements made by numerous astronomers working in synergy during the 1920s.

There are a diverse set of opinions on the topics discussed, and the reader should remain skeptical (of the present article and other interpretations), particularly since knowledge of the topic is evolving and more is yet to emerge. Two talks from the "Origins of the Expanding Universe: 1912-1932" conference are posted below (by H. Nussbaumer and M. Way), in addition to a talk by I. Steer from a separate event.

Primordial magnetic fields in the post-recombination era and early reionization

We explore the ways in which primordial magnetic fields influence the thermal and ionization history of the post-recombination Universe. After recombination, the Universe becomes mostly neutral, resulting also in a sharp drop in the radiative viscosity. Primordial magnetic fields can then dissipate their energy into the intergalactic medium via ambipolar diffusion and, for small enough scales, by generating decaying magnetohydrodynamic turbulence. These processes can significantly modify the thermal and ionization history of the post-recombination Universe. We show that the dissipation effects of magnetic fields whose strength redshifts to a present value B0 = 3 × 10⁻⁹ G, smoothed on the magnetic Jeans scale and below, can give rise to Thomson scattering optical depths τ ≳ 0.1, although not in the range of redshifts needed to explain the recent Wilkinson Microwave Anisotropy Probe (WMAP) polarization observations. We also study the possibility that primordial fields could induce the formation of subgalactic structures for z ≳ 15. We show that early structure formation induced by nanogauss magnetic fields is potentially capable of producing the early reionization implied by the WMAP data. Future cosmic microwave background observations will be very useful to probe the modified ionization histories produced by primordial magnetic field evolution and to constrain their strength.
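The abstract's phrase "redshifts to a present value B0" refers to the standard scaling of a flux-frozen cosmological magnetic field, whose strength dilutes with the expansion as B(z) = B0 (1 + z)². A minimal sketch of that relation (the 3 nG value is the one quoted in the abstract):

```python
def b_at_redshift(b0_gauss, z):
    """Strength of a flux-frozen cosmological magnetic field at
    redshift z: B(z) = B0 * (1 + z)^2, where B0 is the present-day
    (comoving) value."""
    return b0_gauss * (1.0 + z) ** 2

# B0 = 3 nG evaluated around recombination (z ~ 1100): about 3.6e-3 G
print(b_at_redshift(3e-9, 1100))
```

This scaling is why a field that is dynamically negligible today can still deposit significant energy into the intergalactic medium at early times.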

Acknowledgements

We thank C. Steidel, A. Shapley and T. Heckman for discussions. S.C.C. acknowledges support from NASA. I.R.S. acknowledges support from the Royal Society and a Philip Leverhulme Prize Fellowship. NRAO is operated by Associated Universities Inc., under a cooperative agreement with the US National Science Foundation. Data presented herein were obtained using the W. M. Keck Observatory, which is operated as a scientific partnership among Caltech, the University of California and NASA. The Observatory was made possible by the financial support of the W. M. Keck Foundation.