Why aren't neutron stars full of dark matter?

Dark matter interacts with the gravitational force right? Well, unlike black holes, neutron stars are actually visible, and they're an enormous gravitational sink, so dark matter should collect to them.

But if all that is true, which it seems to be, why haven't astronomers detected or used neutron stars to detect dark matter?

Yes, neutron stars might actually accumulate weakly interacting dark matter and this allows some observational constraints on its nature. Basically, the temperature and continued existence of neutron stars places bounds on the density and interaction cross-section of dark matter.

A dark matter particle that does not interact with matter will just have its trajectory bent by the gravitational field of a heavy object, so most unbound particles will simply swoop past on a hyperbolic trajectory. But as discussed in Adams & Laughlin (1997), if there is some matter-dark matter interaction then the particle may scatter off a matter particle and end up with less than escape velocity. In this way white dwarfs and neutron stars would indeed accumulate dark matter in their cores. The rate of accumulation is proportional to $\rho v \sigma$, where $\rho$ is the dark matter density, $v$ the average relative velocity and $\sigma$ the cross section. Adams & Laughlin estimate that a white dwarf would accumulate its own mass in $10^{25}$ years, but this will depend on the cross section (if it is too small the dark matter will pass through), which is at present unknown.
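To make the $\rho v \sigma$ scaling concrete, here is a rough order-of-magnitude sketch. This is not Adams & Laughlin's actual calculation: the halo density, velocity and white-dwarf parameters below are assumed typical values, and every particle crossing the star is optimistically counted as captured, so it only bounds the accretion rate from above:

```python
import math

# Assumed local dark matter halo values (illustrative, not from the paper)
RHO_DM = 5.3e-22   # kg/m^3  (~0.3 GeV/cm^3)
V_REL  = 2.2e5     # m/s     (~220 km/s typical relative velocity)

# Assumed white dwarf parameters
M_WD = 1.2e30      # kg (~0.6 solar masses)
R_WD = 7e6         # m
G    = 6.674e-11   # m^3 kg^-1 s^-2

# Gravitational focusing boosts the geometric cross-section by ~(1 + v_esc^2/v^2)
v_esc = math.sqrt(2 * G * M_WD / R_WD)
area  = math.pi * R_WD**2 * (1 + (v_esc / V_REL)**2)

mdot    = RHO_DM * V_REL * area   # kg/s swept through the star, at most
t_years = M_WD / mdot / 3.15e7    # years to sweep up the star's own mass

print(f"max sweep-up rate ~ {mdot:.1f} kg/s")
print(f"time to sweep up one WD mass ~ {t_years:.1e} yr")
```

Even when every crossing particle is counted as captured, the timescale exceeds $10^{21}$ years; multiplying by a realistic (small) per-crossing scattering probability, set by the unknown cross section, pushes it toward the $10^{25}$-year figure quoted above.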

Were this accumulation the only thing happening, it would eventually make white dwarfs, and later neutron stars, implode. However, dark matter is plausibly a mix of particles and antiparticles that annihilate each other at a rate $\sim \rho^2$; in an enriched environment like a white dwarf core this would produce energy, with the emitted photons heating things up. Adams & Laughlin estimate the luminosity at about $10^{-12}L_\odot$, which is imperceptible in the current era but would eventually keep white dwarfs at 63 K in the far future (until the dark matter halo runs out).

Other, more elaborate, calculations lead to accretion estimates that are higher. If the rate were high enough, then we would not see any cool dense objects - so white dwarf and neutron star cooling gives some bounds on the possible density and cross sections, albeit not very strict ones. For example, one model suggests that neutron stars would level out at 10,000 K. Cool star observations can also already rule out some dark matter models.

There are even some arguments that super-Earth planets in dense dark matter halos might be heated significantly, although this may require unrealistically dense halos and large cross sections. The current heat flow of the Earth does give some constraints on how strongly it can interact.

So neutron stars are not directly giving us dark matter detection, but they (and planets and white dwarfs) are giving us some information.

I have never seen any discussion of the interaction between any of the various Dark Matter candidates and neutron star matter. But we can still say something useful about the prospect.

First, remember that we don't know what Dark Matter (DM) is. We do have a number of theories that are reasonable extensions of the Standard Model and contain particles that behave sorta-kinda like we think DM behaves, but not only do we lack good evidence for any of them, we have looked for most of them and have failed to find anything. The negative evidence falls well short of certain, but it does suggest that there is something important we don't know yet.

At any rate, you're correct that DM ought to be attracted by the neutron star's (NS's) gravity, and it seems plausible that the DM would react with the NS's dense matter. But the only interactions that I'm aware of would release a bit of heat and a bit of electromagnetic radiation at the point of interaction. (DM particles aren't hugely energetic, and DM isn't very dense.) This would be promptly absorbed and result in an ultra-minuscule heating of the NS.

And neutron stars are far away. It is very difficult to see how we could hope to observe any effects of whatever interaction may be taking place.

I want to clarify a part of this question that some people may not understand. If dark matter literally were only affected by gravity, then you would not expect to see it collecting at the centers of neutron stars. As a dark matter particle falls towards a neutron star, it picks up speed, passes through the star, and then starts to slow down again. By the time it leaves the vicinity of the neutron star it will have the same speed it had when entering. In order to collect dark matter, the neutron star has to slow it down somehow. This is what Anders Sandberg meant when he mentioned the interaction cross-section of dark matter: that refers to the probability of interactions that might slow the dark matter particles down enough for them to be trapped.

To put it a little differently, suppose a particle were far enough away from the neutron star that the neutron star's gravity can basically be ignored. Imagine that the particle is drifting towards the neutron star, so that eventually it will pass through it. Then, by the time the particle reaches the neutron star, it is mathematically guaranteed to be at or above escape velocity: escape velocity is, by definition, the speed a particle has after falling from rest infinitely far away. The neutron star cannot trap anything, because anything that drifts by is guaranteed to pick up precisely enough speed falling in that, by the time it gets back out to the same distance it started from, it is moving away at the same speed it was moving in before.
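The argument is just energy conservation, and easy to check numerically. In this sketch (the star parameters are assumed round numbers), a particle with speed $v_\infty$ far away has speed $v(r)=\sqrt{v_\infty^2+v_{\rm esc}(r)^2}$ at radius $r$, so it is never below the local escape velocity:

```python
import math

G    = 6.674e-11   # m^3 kg^-1 s^-2
M_NS = 2.8e30      # kg, assumed ~1.4 solar-mass neutron star
R_NS = 1.2e4       # m, assumed ~12 km radius (Newtonian sketch; a real
                   # neutron star needs general relativity, same logic)

def v_escape(r):
    """Local escape velocity at radius r (point-mass approximation)."""
    return math.sqrt(2 * G * M_NS / r)

def v_at(r, v_inf):
    """Speed at radius r of a particle that had speed v_inf far away,
    from conservation of kinetic + potential energy."""
    return math.sqrt(v_inf**2 + v_escape(r)**2)

v_inf = 2.2e5  # m/s, a typical halo velocity far from the star
for r in (1e9, 1e6, R_NS):
    print(f"r = {r:.0e} m: v = {v_at(r, v_inf):.3e}, v_esc = {v_escape(r):.3e} m/s")

# The infalling particle is never below the local escape velocity, so
# without a non-gravitational interaction it cannot be captured.
assert all(v_at(r, v_inf) >= v_escape(r) for r in (1e9, 1e6, R_NS))
```

Symmetrically, on the way back out the particle decelerates along the same curve, so far from the star it is again moving at $v_\infty$.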

To be fair, there is one caveat here. If the particles had significant gravity, so they could strongly influence each other, then they could interact in such a way that one particle flies out even faster, and the other one is trapped in orbit around (maybe also passing through) the neutron star. Some of Jupiter's moons may have been captured that way. But dark matter particles are thought to have negligible mass.

I am not saying anything different from Anders Sandberg here, but I just want to emphasize the importance of his statement that "most particles will just swoop on past."

Why aren't neutron stars full of dark matter?

Because dark matter doesn't consist of particles. There's something of a myth that it does, which I think comes from particle physicists who have never actually read Einstein's original material. I also think science is something of a competitive business, and there's a tendency for advocates to promote their own theory (e.g. WIMPs) and claim that a competitor theory (e.g. MOND) is flawed.

Dark matter interacts with the gravitational force right? Well, unlike black holes, neutron stars are actually visible, and they're an enormous gravitational sink, so dark matter should collect to them.

Remember that we have good scientific evidence for flat galactic rotation curves and other phenomena. These suggest that either a) there's some unseen "dark matter" around somewhere, or b) that gravity doesn't work quite the way that people think. However the evidence does not actually say dark matter is made out of particles and falls down.

But if all that is true, which it seems to be, why haven't astronomers detected or used neutron stars to detect dark matter?

Because we do not live in some Chicken Little world where the sky is falling in. I'm referring to Gullstrand-Painlevé coordinates which model a gravitational field as a place where space is falling down. Einstein rejected the idea, but some contemporary physicists take it seriously, see this for example.

Image credit Andrew Hamilton

Why is this relevant? Because in his 1916 Foundation of the General Theory of Relativity Einstein said "the energy of the gravitational field shall act gravitatively in the same way as any other kind of energy". This is spatial energy, and it isn't made of particles. The energy density of space near the Earth is greater than the energy density of space further away from the Earth. Because of this, there's a gravitational effect. This is why "gravity gravitates". Einstein also described a gravitational field as a place where space is "neither homogeneous nor isotropic". So dark matter might simply be inhomogeneous space. Don't forget that, as per the raisin-cake analogy, the space between the galaxies expands whilst the space within the galaxies does not. Conservation of energy tells me that this will surely lead to an inhomogeneous spatial energy density, and that an older galaxy will be surrounded by a bigger/steeper halo of inhomogeneous space than a younger galaxy, so it will look like there's more dark matter present.

What Einstein said means that there's "dark matter" of sorts in the room you're in, right in front of your face. Only it isn't made of particles, and it isn't falling down. Instead it's made of space. Don't forget that space is dark, and there's a lot of it about.

Are neutron stars blasting out dark matter?

One of the most irritating problems in astrophysics right now can be simply stated: What is dark matter?

Dark matter is similar to what we think of as normal matter — the kind we're made of — in that it has mass and therefore gravity. But it doesn't emit light and doesn't interact with normal matter, making it the devil's own game to detect. We have plenty of reasons to think it exists, but we've never been able to directly detect it.

A lot of potential dark matter candidates have been eliminated over the years, leaving only ever more exotic possibilities, like subatomic particles that have not yet been discovered.

The leading candidate there is the axion, predicted to exist by current quantum models. Axions don't emit light, don't interact with normal matter except through gravity, could be extremely abundant in the Universe, and are very elusive. They fit the bill!

So how do you find something that has never been found? You have to look at predicted effects of axions, and see if any of them are observable.

A neutron star is incredibly small and dense, packing the mass of the Sun into a ball just a few kilometers across. This artwork depicts one compared to Manhattan. Credit: NASA's Goddard Space Flight Center

Here's where neutron stars come in: the remnants of massive stars after they explode. While the outer layers of the star get blasted away, the core (up to about 2.8 times the mass of the Sun) collapses, forming an object mostly made of neutrons and only a couple of dozen kilometers across. This makes them ridiculously dense, hot, magnetically supercharged, and the perfect oven in which to bake axions.

When protons or neutrons in the core of the neutron star pass close to each other, they release energy. Given that the temperature in the core of the star is several billion degrees, that energy can be quite high. Particles can be created out of that energy (that's what E=mc² is telling us: matter can be converted to energy and vice versa), and one kind of particle predicted to be created in copious numbers is the axion.

Despite the incredible density of a neutron star (a chunk the size of a six-sided die would outweigh all the cars in the US combined), the axion will blow right out of the star, because it really doesn't like to interact with matter. However, when it gets to the outside of the star, the immensely powerful magnetic field does affect it. The math is… difficult, but in that field the axion should convert into photons, creating X-rays.

Dark matter is thought to have formed a huge web in the early Universe, like this model from a computer simulation, allowing galaxies to form along the filaments. Credit: Springel et al. / The Millennium Simulation Project

So, do we just point our X-ray telescopes at neutron stars and see if they emit them? Well, no, because neutron stars are so hot and powerful they emit X-rays all the time. But there's the cool bit: We can calculate how much X-ray emission we expect from them due to their being hot, and see if they emit a lot more than that. We look for an excess of X-rays. If we see it, that might be from axions!

Turns out there are seven neutron stars just perfect for this sort of thing. Called the Magnificent Seven, they're close by (all within roughly 2,000 light years of Earth), isolated, and cooling after their creation event; that is, being at the center of a supernova.

All of them have been observed by X-ray telescopes (specifically Chandra and XMM-Newton) and … all but one seem to have X-ray excesses. Hmmm.

Artwork depicting the Chandra X-ray Observatory. Credit: NASA/CXC & J.Vaughan

However, of course, there's a caveat. Two of them show clear excesses, more X-ray emission than expected to a statistically significant degree. Another has an excess but it's not as well established statistically. Two more have an excess that is even less significant, and the last has a deficit of X-rays.

Still, that's encouraging. Even better, the two neutron stars with strong excess seem to emit at about the amount predicted by the physicists' axion creation model. Hmmm again.

It's also possible that the ones with less solid excesses may just need to be observed more; taking longer exposures makes it easier to see faint stuff. Also, neutron stars will emit different amounts depending on all sorts of things, including temperature, spin, magnetic field strength, and so on. So it's still possible they're all emitting axions but we can't detect the X-rays well enough yet.

Or, to be careful here, it may be that this has some other explanation entirely and it's not axions at all. Remember, at the moment axions are entirely theoretical. We have good reasons to believe they exist, but nothing has been proven yet.

Artwork depicting the magnetic field surrounding a neutron star. Credit: Casey Reed / Penn State University

But this X-ray excess avenue seems to be a good one to travel down. The team talks a bit about what kinds of further observations are needed to help, noting that the proposed European Athena observatory could help nail this down. That is due to launch sometime in the 2030s, so it'll be a while though.

Still, plenty to do before then to help work out the kinks in the observations. Interestingly, there's another way neutron stars can possibly make axions, but observations of that method have come up empty. Other experiments are searching for them too.

I hope they find them. Dark matter is fascinating, and its maniacal dislike of normal matter is amusing as an idea, but getting increasingly irritating observationally as time goes on. Finding axions could solve a lot of issues in astrophysics and particle physics.

And if they discover axions don't exist? Well, that would be mighty interesting, too. That means something else is going on in the Universe, and a whole new set of hypotheses would have to be dreamed up to explain it. But for now, we'll stick to one so-far undetected mystery at a time.

Scientists Prove That Neutron Stars Aren’t Squishy

Spinning neutron stars emit jets of light that can sweep over the Earth like a spotlight.

A recent measurement of a rapidly spinning neutron star has disproved what was regarded as a credible picture of the star’s center. The centers of neutron stars are stiff and not squishy.

Neutron stars are the corpses of what were once large and active stars. These stars profligately burned through their nuclear fuel, exploded in a supernova, and left a small remnant behind.

And by small, I mean small. A typical neutron star has a mass of about 1.2–2.0 times the mass of the sun, but a diameter about the size of a mid-sized city, say 12-ish miles, or 20 km. Most neutron stars have a mass of about 1.4 solar masses. The density of neutron star matter is astounding: each cubic centimeter has a mass equivalent to a cube of rock about half a mile on a side (800 meters). Neutron star material is the densest known substance in the cosmos.
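The cube-of-rock comparison is easy to check with a rough sketch (the densities below are assumed representative values, not figures from the article):

```python
# How big a cube of ordinary rock matches 1 cm^3 of neutron-star matter?
RHO_NS   = 1.0e18   # kg/m^3, assumed near-core neutron-star density
RHO_ROCK = 2.7e3    # kg/m^3, typical crustal rock

mass_per_cm3 = RHO_NS * 1e-6            # kg in one cubic centimetre
rock_volume  = mass_per_cm3 / RHO_ROCK  # m^3 of rock with the same mass
side_m = rock_volume ** (1.0 / 3.0)

print(f"1 cm^3 of neutron-star matter ~ {mass_per_cm3:.1e} kg")
print(f"equivalent rock cube side ~ {side_m:.0f} m")
```

With the assumed core density this gives a cube roughly 700 m on a side, the same ballpark as the article's half-mile figure; the answer scales directly with the assumed density, which varies severalfold through the star.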

Astronomers know a fair bit about the structure of neutron stars. The outer crust is thought to be similar to what is found in white dwarfs, which is to say nuclei and electrons, although they are crushed together tighter than in ordinary matter. Going deeper, the force of gravity is sufficiently high that electrons are forced into the protons and become neutrons. That’s where the name “neutron star” comes from, and it is this part of the neutron star that has such impressive densities.

Going deeper into a neutron star, there is theoretical controversy as to what form matter takes under those conditions. One school of thought maintains that the core of a neutron star is simply more dense material, composed solely of neutrons. In this paradigm, the core is just regular neutron star stuff. However, there is another possibility. Some scientists believe that the extra pressure at the center of neutron stars is enough to break apart the neutrons and allow for the constituents of neutrons to mix freely together.

Neutrons are not fundamental particles. They consist of a witch’s brew of smaller particles called quarks and gluons. Ordinarily, quarks and gluons are solidly ensconced in protons and neutrons, but it is possible that high enough pressures would allow the quarks and gluons to intermingle. Scientists call this hypothetical form of matter “quark matter.” If the core of the star is quark matter, it leads to the surprising prediction that as the mass of a neutron star increases, its size will decrease, because the extra gravity will squish quark matter more.

The only way to settle this scientific controversy is to use observations of neutron stars themselves. No other laboratory in the universe is suitable. The approach that astronomers used was to measure their size.

Measuring the size of neutron stars is no easy feat. They are too small and too far away to directly measure their diameter. Instead, researchers need to use indirect measurements.

The NICER apparatus is located on the International Space Station.

An experimental collaboration sponsored by NASA called the Neutron star Interior Composition Explorer (NICER) has developed a method for measuring the size of neutron stars. They watch neutron stars as they rotate. Despite being electrically neutral overall, neutron stars have very strong magnetic fields, which emerge from the surface of the star at specific spots. These strong magnetic fields are responsible for the emission of X-rays. Essentially, some neutron stars emit a "searchlight" kind of beam of X-rays into space as they rotate. Because of the pulsating signal, such neutron stars are called "pulsars."

The NICER apparatus is able to time the searchlight with a precision of better than a millionth of a second. Combining that capability with the ability to measure the spectrum of the X-rays, astronomers are able to reconstruct the gravitational potential of the star and, from that, the star's size. It's a remarkable achievement.

The NICER collaboration’s first success occurred in 2019, when they studied a pulsar called PSR J0030. It is located about 100 light years away from Earth in the direction of the constellation Pisces. PSR J0030 has a mass of about 1.4 times that of the sun and its diameter was determined to be about 26 kilometers.

To explore the question of the composition of the stellar core, astronomers needed to repeat the measurement for a larger star. They selected PSR J0740, a pulsar about 400 light-years away. It is also more massive, with a mass of about two solar masses. This extra mass makes it possible to probe the size of neutron stars as a function of mass and resolve the question of whether the core is made of quark matter or not.

The NICER collaboration measured the size of PSR J0740 and determined that its size was in the range of 25 – 27 kilometers, or about the same as the lighter pulsar. This observation strongly disfavors the idea that the core of neutron stars is made of quark matter. Instead, it appears to be neutrons all the way down.

Our ability to understand the innards of neutron stars is improving rapidly. Gravitational wave observatories like the Laser Interferometer Gravitational-wave Observatory (LIGO) have measured what happens when neutron stars collide, giving us insight into the nature of neutron star matter. And a recent measurement at the Thomas Jefferson National Accelerator Facility has also shed light on the stiffness of nuclear matter. When observations like those are combined with the capabilities of the NICER observatory, astronomers with an interest in the behavior of stars at the end of their lives will have a lot to study, as we enter a Golden Age of neutron star astronomy.

Neutron stars, not dark matter, may explain Milky Way’s gamma-ray excess

Studies by two independent groups from the US and the Netherlands indicate that the observed excess of gamma rays from the inner galaxy likely comes from a new source rather than from dark matter. The best candidates are rapidly rotating neutron stars, which will be prime targets for future searches. The Princeton/MIT group and the Netherlands-based group used two different techniques, non-Poissonian noise analysis and wavelet transformation, respectively, to independently determine that the gamma-ray signals were not due to dark matter annihilation.

Image credit: Christoph Weniger, UvA. © UvA/Princeton.

Bursts of gamma rays from the centre of our galaxy are not likely to be signals of dark matter but rather other astrophysical phenomena such as fast-rotating stars called millisecond pulsars, according to two new studies, one from a team based at Princeton University and the Massachusetts Institute of Technology and another based in the Netherlands at the University of Amsterdam.

Previous studies suggested that gamma rays coming from the dense region of space in the inner Milky Way galaxy could be caused when invisible dark matter particles collide. But using new statistical analysis methods, the two research teams independently found that the gamma ray signals are uncharacteristic of those expected from dark matter. Both teams reported the finding in the journal Physical Review Letters this week.

“Our analysis suggests that what we are seeing is evidence for a new astrophysical source of gamma rays at the centre of the galaxy,” said Mariangela Lisanti, an assistant professor of physics at Princeton. “This is a very complicated region of the sky and there are other astrophysical signals that could be confused with dark matter signals.”

The centre of the Milky Way galaxy is thought to contain dark matter because it is home to a dense concentration of mass, including dense clusters of stars and a black hole. A conclusive finding of dark matter collisions in the galactic centre would be a major step forward in confirming our understanding of our universe. “Finding direct evidence for these collisions would be interesting because it would help us understand the relationship between dark matter and ordinary matter,” said Benjamin Safdi, a postdoctoral researcher at MIT who earned his Ph.D. in 2014 at Princeton.

To tell whether the signals were from dark matter versus other sources, the Princeton/MIT research team turned to image-processing techniques. They looked at what the gamma rays should look like if they indeed come from the collision of hypothesised dark matter particles known as weakly interacting massive particles, or WIMPs. For the analysis, Lisanti, Safdi and Samuel Lee, a former postdoctoral research fellow at Princeton who is now at the Broad Institute, along with colleagues Wei Xue and Tracy Slatyer at MIT, studied images of gamma rays captured by NASA’s Fermi Gamma-ray Space Telescope, which has been mapping the rays since 2008.

Dark matter particles are thought to make up about 85 percent of the mass in the universe but have never been directly detected. The collision of two WIMPs, according to a widely accepted model of dark matter, causes them to annihilate each other to produce gamma rays, which are the highest-energy form of light in the universe.

According to this model, the high-energy particles of light, or photons, should be smoothly distributed among the pixels in the images captured by the Fermi telescope. In contrast, other sources, such as rotating stars known as pulsars, release bursts of light that show up as isolated, bright pixels.

The researchers applied their statistical analysis method to images collected by the Fermi telescope and found that the distribution of photons was clumpy rather than smooth, indicating that the gamma rays were unlikely to be caused by dark matter particle collisions.
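The statistical idea can be illustrated with a deterministic toy example (a cartoon of the non-Poissonian template idea, not the groups' actual pipeline; all numbers are made up): two sky maps carry the same total photon count, but a simple variance-to-mean ratio separates the smooth one from the one dominated by point sources.

```python
# Toy illustration: two sky maps with the same total photon count,
# one spread smoothly, one concentrated in a few "pulsar-like" pixels.
N_PIX = 100
TOTAL = 1000

smooth = [TOTAL // N_PIX] * N_PIX        # 10 photons in every pixel
clumpy = [0] * N_PIX
for i in range(10):                      # 10 bright point sources
    clumpy[i] = TOTAL // 10              # 100 photons each

def variance_to_mean(counts):
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean

print(variance_to_mean(smooth))   # 0.0  (perfectly smooth; Poisson noise would give ~1)
print(variance_to_mean(clumpy))   # 90.0 (point sources drive var/mean far above 1)
```

A diffuse dark-matter annihilation signal should look Poisson-smooth (variance-to-mean near 1), while unresolved point sources such as millisecond pulsars push the ratio well above 1, which is the "clumpiness" the teams detected.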

Exactly what these new sources are is unknown, Lisanti said, but one possibility is that they are very old, rapidly rotating stars known as millisecond pulsars. She said it would be possible to explore the source of the gamma rays using other types of sky surveys involving telescopes that detect radio frequencies.

Douglas Finkbeiner, a professor of astronomy and physics at Harvard University who was not directly involved in the current study, said that although the finding complicates the search for dark matter, it leads to other areas of discovery. “Our job as astrophysicists is to characterise what we see in the universe, not get some predetermined, wished-for outcome. Of course it would be great to find dark matter, but just figuring out what is going on and making new discoveries is very exciting.”

According to Christoph Weniger from the University of Amsterdam and lead author of the Netherlands-based study, the finding is a win-win situation: “Either we find hundreds or thousands of millisecond pulsars in the upcoming decade, shedding light on the history of the Milky Way, or we find nothing. In the latter case, a dark matter explanation for the gamma ray excess will become much more obvious.”


New speed-of-sound interpolation

The starting point of the new interpolation method is to consider the squared speed of sound $c_{\rm s}^2$ as a function of the baryon chemical potential $\mu$, and use this quantity to construct all other thermodynamic functions, in particular the pressure $p(\mu)$. In practice, the speed of sound is first integrated from the CET matching point $n_{\rm CET} = 1.1\,n_0$ to higher densities to give the baryon density

$$n(\mu) = n_{\rm CET} \exp\left[\int_{\mu_{\rm CET}}^{\mu} \frac{{\rm d}\mu'}{\mu'\, c_{\rm s}^2(\mu')}\right],$$

where $\mu_{\rm CET}$ is the baryon chemical potential corresponding to the density $n_{\rm CET}$, that is, $n_{\rm CET} \equiv n(\mu_{\rm CET})$. This result is then further integrated to arrive at the pressure:

$$p(\mu) = p_{\rm CET} + \int_{\mu_{\rm CET}}^{\mu} {\rm d}\mu'\, n(\mu'),$$

where $p_{\rm CET} \equiv p(\mu_{\rm CET})$.
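A minimal numerical sketch of these two integrations (the matching values and the $c_{\rm s}^2$ profile below are illustrative assumptions, not the paper's inputs; simple trapezoidal steps stand in for the analytic piecewise treatment described next):

```python
import math

# Assumed matching-point values (illustrative only, not the paper's inputs)
MU_CET = 0.97    # GeV, baryon chemical potential at the CET matching point
N_CET  = 0.16    # fm^-3, baryon density there (~1.1 n_0)
P_CET  = 3.0e-3  # GeV fm^-3, pressure there

def cs2(mu):
    """Assumed squared speed of sound (units of c^2), capped at the causal limit."""
    return min(0.1 + 0.4 * (mu - MU_CET), 1.0)

def eos(mu_max, steps=2000):
    """Integrate d(ln n)/dmu = 1/(mu cs2) and dp/dmu = n together,
    starting from (MU_CET, N_CET, P_CET), with trapezoidal steps."""
    h = (mu_max - MU_CET) / steps
    log_n, p = math.log(N_CET), P_CET
    mu, n_prev = MU_CET, N_CET
    for _ in range(steps):
        mu_next = mu + h
        log_n += 0.5 * h * (1/(mu*cs2(mu)) + 1/(mu_next*cs2(mu_next)))
        n_next = math.exp(log_n)
        p += 0.5 * h * (n_prev + n_next)
        mu, n_prev = mu_next, n_next
    return n_prev, p

n, p = eos(1.5)
print(f"n(1.5 GeV) ~ {n:.3f} fm^-3, p(1.5 GeV) ~ {p:.4f} GeV fm^-3")
```

A quick consistency check of the output is that ${\rm d}p/{\rm d}\mu = n$ holds numerically, which is the thermodynamic relation the second integral encodes.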

The above relations must be solved numerically in general, but in the following simple case, which we have implemented in our analysis, they may be dealt with analytically. Namely, we first take the sequence of $N_p$ pairs

$$\{(\mu_i, c_{{\rm s},i}^2)\}, \quad i = 1, \ldots, N_p,$$

with $\mu_1 = \mu_{\rm CET}$, $\mu_{N_p} = 2.6$ GeV and $\mu_{i-1} < \mu_i < \mu_{i+1}$ for all other $i$. We then construct a $c_{\rm s}^2$ curve as a piecewise-linear function connecting these points; that is, for each $i = 1, \ldots, N_p - 1$ and for $\mu \in [\mu_i, \mu_{i+1}]$,

$$c_{\rm s}^2(\mu) = \frac{(\mu_{i+1} - \mu)\, c_{{\rm s},i}^2 + (\mu - \mu_i)\, c_{{\rm s},i+1}^2}{\mu_{i+1} - \mu_i}.$$

At the matching points $\mu_1$ and $\mu_{N_p}$, we require $p$ and $c_{\rm s}^2$ to match the corresponding values given by the CET and pQCD EoSs, respectively. In addition, we take $n$ to be continuous at each matching point, but note that our construction allows for EoSs that mimic discontinuous first-order transitions arbitrarily closely.

For a given $N_p$, we have $N_p - 2$ independent matching chemical potentials $\mu_i$ and $N_p - 2$ independent speed-of-sound points $c_{{\rm s},i}^2$, from which one of each is determined through matching to the high-density EoS, leaving $2N_p - 6$ parameters for given low- and high-density EoSs. If we instead write this in terms of the number of interpolating segments $N \equiv N_p - 1$, the result becomes $2N - 4$. This is one free parameter fewer than the number needed to define a polytropic EoS composed of the same number of segments (ref. 16).

The above procedure is used to construct individual EoSs by choosing $N = 3, 4, 5$ and then randomly picking values for the matching points $\mu_i$, speeds of sound $c_{{\rm s},i}^2$ and the pQCD parameter $X_{\rm pQCD}$ (ref. 38). The parameter values are taken from uniform distributions $\mu_i \in [\mu_{\rm CET}, 2.6\ {\rm GeV}]$, $c_{{\rm s},i}^2 \in (0, 1)$, $X_{\rm pQCD} \in [1, 4]$, in addition to which we choose roughly the same number of the 'hard' or 'soft' nuclear EoSs of ref. 14. Finally, we vary the extreme EoSs in the $\epsilon$-$p$ plane within each $c_{\rm s}^2$ band plotted in our paper, to ensure that we satisfactorily probe the size of these regions. This leads to the ensemble studied above, which consists of ∼570,000 individual EoSs. Roughly 160,000 of these fulfil the astrophysical constraints described in the main text, while ∼70,000 of the allowed EoSs contain at least one first-order phase transition. We have carefully made sure that these ensemble sizes are sufficiently large that our results are stable with respect to increasing the number of EoSs.

Finally, we note that while the interpolation method described above is genuinely new, a number of related articles have recently appeared in which the NS matter EoS has been constructed starting from the speed of sound 39,40,41,42,43,44 . Although most of these works introduce a non-trivial ansatz function for the quantity, thus being more restrictive than our approach, in ref. 39 the speed of sound is allowed to behave in a more general way. The main difference between the EoS bands constructed in this reference and our current work originates from our high-density pQCD constraint, which effectively forces the EoS to be softer at high densities.

Comparison of different interpolations

To quantify the potential bias introduced into our results by the selection of the speed-of-sound interpolation method, we compare our EoS and MR ensembles to ones obtained with the following two schemes:

a piecewise polytropic interpolation of the pressure as a function of baryon density, $p(n) \propto n^{\varGamma}$ within each segment;

a spectral interpolation of the adiabatic index $\varGamma(p) = \frac{\epsilon + p}{p}\left[\frac{{\rm d}\epsilon}{{\rm d}p}\right]^{-1}$ in terms of Chebyshev polynomials.

Both of these interpolation methods have been abundantly discussed in the literature (refs. 14,15,16,17,19,45,46).

We construct the EoS bands corresponding to each of the three interpolation methods, implementing the astrophysical constraints listed in the main text. To make the EoS families comparable to each other, we not only make sure that the ensembles are of roughly similar size, but in addition choose the numbers of free parameters in the EoSs to be approximately equal. For the piecewise polytropic interpolation, we allow up to four independent segments (ref. 16), amounting to five free parameters, while for the spectral interpolation proposed by Lindblom (ref. 45) we use Chebyshev polynomials of degree five (four free parameters). Finally, for the speed-of-sound interpolation, we use up to five independent segments (six free parameters) in this comparison. In each case, we randomly generate large ensembles of interpolation functions, ensure that the resulting EoSs are causal and thermodynamically consistent, and finally discard those EoSs that are in disagreement with the observational constraints introduced in the main text. Again, we add no explicit first-order transitions to the EoSs, but allow continuous transitions that are arbitrarily strong, thus closely mimicking discontinuous phase transitions.

Our conclusion from the comparison of the constructed EoSs (Extended Data Fig. 1a) is that the speed-of-sound and polytropic interpolations produce nearly identical results, while the spectral interpolation leads to a somewhat more constrained band. This is not surprising: the spectral method does not build on piecewise-defined interpolating functions, so the resulting EoSs are smooth by construction and unable to describe very sharp and rapid changes in the EoSs.

The families of MR curves obtained by integrating the Tolman–Oppenheimer–Volkoff equations using the above three ensembles of EoSs also largely indicate agreement between the methods (Extended Data Fig. 1b). The minimal and maximal radii for a fixed mass agree well among the different interpolations, with the spectral interpolation occupying a slightly more restricted area for low-mass stars with M < 1.2M⊙. The agreement between different interpolations also persists as a function of tidal deformability: constraining Λ(1.4M⊙) according to 70 < Λ(1.4M⊙) < 580 (ref. 19), we find that the different interpolations still give similar maximal radii as functions of the NS mass as long as M ≳ 1.4M⊙. In particular, the maximal radii at M = 1.4M⊙ are in excellent quantitative agreement among the different interpolation methods, as is expected from the previously observed tight correlation between NS radii and tidal deformabilities16. Considering stars with smaller masses, we observe that the speed-of-sound and piecewise polytropic interpolations allow EoSs that are extremely hard at low densities, leading to large radii R ≈ 14 km for M ≈ 1M⊙, but rapidly soften at larger densities, such that for M = 1.4M⊙ the radii are smaller and consistent with the upper limits for tidal deformability. Again, because the spectral method leads to smoother interpolations, it is natural that it does not allow these rapidly changing EoSs.
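The integration step referred to above can be sketched in a few lines. The toy below Euler-integrates the TOV equations for a single analytic polytrope in geometric units; the EoS, parameter values and surface cutoff are illustrative assumptions, not the paper's ensembles:

```python
import numpy as np

def tov_mass_radius(eps_c, K=100.0, gamma=2.0, dr=1e-3):
    """Euler-integrate the Tolman-Oppenheimer-Volkoff equations
        dp/dr = -(eps + p)(m + 4*pi*r^3*p) / (r*(r - 2m)),
        dm/dr = 4*pi*r^2*eps,
    outward from the centre for a toy polytropic EoS p = K*eps**gamma,
    in geometric units G = c = 1 (r and m in km; eps and p in km^-2).
    Returns the stellar radius R and gravitational mass M."""
    def eps_of(p):
        return (p / K) ** (1.0 / gamma)

    r = dr
    p = K * eps_c ** gamma
    m = (4.0 / 3.0) * np.pi * r ** 3 * eps_c   # mass of the innermost shell
    p_surface = 1e-6 * p                        # crude surface cutoff
    while p > p_surface:
        eps = eps_of(p)
        dpdr = -(eps + p) * (m + 4.0 * np.pi * r ** 3 * p) / (r * (r - 2.0 * m))
        p += dpdr * dr
        m += 4.0 * np.pi * r ** 2 * eps * dr
        r += dr
    return r, m

R, M = tov_mass_radius(1.28e-3)   # one illustrative central energy density
```

Mapping out an MR curve then amounts to repeating this for a range of central densities, for every EoS in an ensemble; a production code would use a higher-order adaptive integrator and tabulated EoSs rather than this toy polytrope.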

Another difference between the interpolation schemes is that the polytropic interpolation does not allow stars as massive as the other two do. We attribute this to the fact that, to achieve very large maximal masses, the EoS needs to stay very stiff, c_s ≈ 1, throughout an extensive density window, which is difficult to realize with polytropic interpolation functions. This difference between the interpolations is somewhat ameliorated when upper limits are placed on the tidal deformability.

Polytropic index and its relation to the phase of QCD matter

As stated in the main text, our criterion for identifying the phase of QCD matter in NS cores is based on analysing the behaviour of the polytropic index γ ≡ d(ln p)/d(ln ε), that is, the slope of the EoS in Fig. 1 and Extended Data Figs. 1a, 2 and 3. Here, we comment on the physics behind this statement, and explain our choice to identify the presence of quark matter using as the quantitative criterion γ < 1.75 continuously up to asymptotic densities.

Matter that exhibits exact conformal symmetry, that is, matter that does not possess intrinsic mass scales, is characterized by γ = 1, independent of the strength of the coupling. This is so because, in the absence of any dimensionful parameters, the energy density and pressure must be proportional to each other, leading to γ = 1. This symmetry can also be shown to lead to a speed of sound squared of c_s² = 1/3 for the system.
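Both quantities are straightforward to evaluate on any tabulated EoS. A small illustration (the density grid and the two toy EoSs are assumptions for demonstration only):

```python
import numpy as np

def polytropic_index(eps, p):
    """gamma = d(ln p)/d(ln eps), evaluated numerically on a tabulated EoS."""
    return np.gradient(np.log(p), np.log(eps))

eps = np.geomspace(150.0, 1.5e4, 500)      # energy-density grid (MeV fm^-3)

# exactly conformal matter, p = eps/3: gamma = 1 and c_s^2 = dp/d(eps) = 1/3
p_conf = eps / 3.0
gamma_conf = polytropic_index(eps, p_conf)
cs2_conf = np.gradient(p_conf, eps)

# a hadronic-like toy polytrope, p proportional to eps^2: gamma = 2 everywhere
gamma_had = polytropic_index(eps, 1e-3 * eps ** 2)
```

The γ < 1.75 criterion of the text sits between these two limiting behaviours: conformal (quark-like) matter gives γ = 1, while viable hadronic models give γ around 2 or above.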

In low- and moderate-density QCD matter, it is well known that the ground state does not exhibit the approximate chiral symmetry of the underlying Lagrangian (see, for example, ref. 47 for details). This spontaneous breaking of the symmetry leads to the emergence of the fundamental scales of nuclear matter, such as hadron masses, and scale-dependent interactions. These mass scales lead to a highly non-conformal behaviour for the EoS, which is reflected in the polytropic index taking large values, typically in excess of 2, in viable models of high-density nuclear matter.

Collections of γ values predicted by different nuclear physics models are available through the related adiabatic index Γ = ((ε + p)/ε)γ, tabulated for example in table III of ref. 48 and plotted in fig. 5.9 of ref. 49. A closer inspection of the wide class of EoSs we have gathered from refs. 9,20,21 shows that, although there are a number of EoSs for which the polytropic index reaches values of order 1.5 or below, all of these are in conflict with the recent LIGO/Virgo tidal deformability bound, a constraint that in particular rules out typical hyperonic EoSs. For the viable hadronic EoSs we have analysed, γ stays around or above 1.75 in all cases except for MPA1, for which the parameter can drop to ≈1.6 at very high densities. This EoS, however, exhibits a speed of sound squared extremely close to unity exactly when γ falls below 2. In addition to casting doubt on its reliability, this fact highlights the lack of overlap between its high-density behaviour and that of our family of interpolated EoSs.

In high-density quark matter, on the other hand, the underlying approximate chiral symmetry of QCD is restored and, as a result, the system exhibits approximate conformal symmetry. Minor violations of conformal behaviour arise from the masses of the up, down and strange quarks; these are, however, very small compared to the nucleon masses. Moreover, the interactions between quarks and gluons also lead to a mild breaking of conformal symmetry in the quark-matter phase, manifesting as a logarithmic dependence of the strong coupling constant on the baryon chemical potential. To a good accuracy, however, quark matter always behaves as an ultrarelativistic gas of interacting quarks and gluons, which becomes even more pronounced in the controlled, perturbative high-density region of the QCD phase diagram, where the polytropic index γ quickly approaches unity.

In the most non-trivial density range near the deconfinement transition, QCD matter evolves from the highly non-conformal hadronic behaviour to one characteristic of quark matter. This transition may take place either as a discontinuous jump in energy density, in which case the value γ = 1.75 may never be reached, or in a smooth crossover manner, whereby a crisp phase identification may not always be feasible (then there may even exist an overlap region where the system can be described both in terms of nuclear and quark degrees of freedom). In either case, our results indicate that the cores of typical pulsars and maximal-mass NSs very probably do not reside here, but rather safely belong to the nuclear and quark matter regimes, respectively. This is reflected in the fact that our qualitative conclusions are not sensitive to the exact choice of the critical polytropic index as long as it resides between the hadronic and quark matter regimes. Indeed, only our detailed numerical conclusions would be somewhat modified should we vary the number γ = 1.75 in a moderate way.

Analysis of EoS smoothness

In addition to reaching large speeds of sound, one way in which some of the EoSs generated by the speed-of-sound interpolation method can be classified as extreme is that the piecewise nature of the interpolation functions allows for very quick changes in the material properties of the matter in arbitrarily small density windows. Although such versatility is, in principle, a desirable feature of the interpolator, these structures are not very likely to appear in nature, and in addition bring unnecessary complications to the polytropic-index analysis performed in the main text. To quantify the level of local structure in our EoSs, we classify them according to the smallest (logarithmic) energy-density interval where structures appear. In practice, this is implemented by demanding that the energy densities at two successive inflection points, ϵ_i and ϵ_{i+1}, where the speed of sound changes its behaviour, satisfy (ϵ_{i+1} − ϵ_i)/ϵ_i > Δlnϵ with a given constant Δlnϵ > 0. Note that imposing this constraint does not exclude discontinuous first-order phase transitions or rapid crossovers.
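One way to implement this classification is sketched below: locate the inflection points of a tabulated sound-speed profile and report the smallest relative gap between successive ones. The noise threshold and the two demonstration profiles are assumptions for illustration, not the paper's actual choices:

```python
import numpy as np

def min_log_spacing(eps, cs2, rel_tol=1e-3):
    """Return the smallest relative gap (eps_{i+1} - eps_i)/eps_i between
    successive inflection points of cs2(eps); an EoS passes the smoothness
    cut if this exceeds the chosen Delta-ln-eps.  Inflection points are sign
    changes of the second derivative, with near-zero values suppressed as
    numerical noise."""
    d2 = np.gradient(np.gradient(cs2, eps), eps)
    d2[np.abs(d2) < rel_tol * np.max(np.abs(d2))] = 0.0
    idx = np.flatnonzero(d2)
    signs = np.sign(d2[idx])
    flips = idx[1:][signs[1:] != signs[:-1]]
    e_inf = eps[flips]
    if len(e_inf) < 2:
        return np.inf    # fewer than two inflection points: no local structure
    return np.min(np.diff(e_inf) / e_inf[:-1])

eps = np.geomspace(150.0, 1.5e4, 2000)     # illustrative grid (MeV fm^-3)
x = np.log(eps)
smooth = 0.3 + 0.2 * np.tanh(x - 7.0)      # a single gentle crossover
wiggly = smooth + 0.05 * np.sin(8.0 * x)   # rapid local structure on top
```

On these two toys, the smooth profile has no repeated inflection structure at all (the function returns infinity), while the wiggly one shows structure on relative energy-density scales well below 1, so a cut on Δlnϵ would reject it.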

As demonstrated in Extended Data Fig. 2, we find that placing minor smoothness limits (Δlnϵ ≲ 1) affects the allowed EoS region mainly around the matching points, where the EoS is best known, but does not have a significant effect at intermediate densities. However, somewhat larger values (Δlnϵ ≳ 1) begin to significantly constrain the allowed region at all densities. This shows that the EoSs that make up the boundaries of the EoS band must exhibit both very large speeds of sound and rapid changes in material properties.

For our analysis of the central values of γ in stars of different masses, we have used Δlnϵ = 0.5, which has a minor effect on the global characteristics of the EoS family. In particular, this cut has a minor effect on the MR relation for stars above 1.4M⊙, shifting the extremes of the allowed radius for a given mass by ≈0.3 km at most. Moreover, we note that, for completeness, we have also allowed the first inflection point to approach n_CET without limit, finding that all the results presented in the main text remain unchanged.

Comparison with recent mass and radius constraints

Our ensemble of EoSs can also be transformed to the MR plane to compare its behaviour to recent radius observations of NSs. In Extended Data Fig. 5, we overlay a few representative X-ray MR measurements on top of our family of MR curves, obtained from the EoS ensemble of Fig. 1. We show examples of measurements obtained with three different methods: direct atmosphere-model fits to the time-evolving X-ray burst spectra (corresponding to the low-mass X-ray binary (LMXB) system 4U 1702–429 (yellow curve)) 30 , cooling-tail method fits to X-ray burst observations (LMXBs 4U 1724–307 (light brown) and SAX J1810.8–2609 (cyan)) 31 and quiescent LMXB spectra fits to sources with reliable distance measurements (NGC 6304 (dark brown), NGC 6397 (green), M13 (purple), M28 (orange), M30 (black), ω Cen (magenta), 47 Tuc X5 (blue) and 47 Tuc X7 (red)) 32,33,34,35,36,37 . For the quiescent LMXB measurements we use public data from refs. 36,37 . We assume, for simplicity, no hot spots. The corresponding atmospheres are assumed to be composed of either hydrogen (dotted lines) or helium (dashed lines), depending on the source.

In general, this kind of qualitative comparison remains largely inconclusive and warrants a further quantitative treatment taking into account the full interplay between different measurements and their uncertainties. That being said, we note that the measurements are consistent with the lower c_s² values, as the measured radii are typically around R ≈ 12 km. This is especially true for the most precise MR measurement, concerning the NS in 4U 1702−429, which is fully compatible with the subconformal-EoS region, where c_s² < 1/3. We emphasize that this particular selection of measurements presented in Extended Data Fig. 5 is by no means meant to be exhaustive. A more detailed self-consistent Bayesian treatment of the problem with all the available measurements present in the literature is left for future work.

Dark Matter in Neutron Stars

In a neutron star the Coriolis force induces Rossby waves, just like on Earth. These are waves with very long wavelengths (on Earth, as much as halfway around the planet) and very large volume, but very small amplitude, on the order of fifty meters. On Earth they have a big effect on climate, through El Niño and so forth.

In a neutron star, Rossby waves (known in this context as r-modes) cause the emission of gravitational waves. Not only that, the gravitational waves reinforce the Rossby waves in a positive feedback loop. This should bleed off rotational energy quickly, but that does not seem to be happening. The best bet is that shear viscosity damps the waves, but a superfluid core is not very viscous, and there does not seem to be enough viscosity.

It has been hypothesized that dark matter could supply the missing viscosity. Dark matter has a long mean free path, which results in shear viscosity, and it would of course tend to concentrate in neutron star cores. But this is a hypothesis of last resort, to be invoked only if no other explanation can be found.

I've got the references; none of this is original with me. But I have a dental appointment, so that will have to wait.

Why Neutron Stars, Not Black Holes, Show The Future Of Gravitational Wave Astronomy

All massless particles travel at the speed of light, including the photon, gluon and gravitational waves, which carry the electromagnetic, strong nuclear and gravitational interactions, respectively. Image credit: NASA / Sonoma State University / Aurora Simonnet.

And perhaps most spectacularly, we can bring the electromagnetic and gravitational-wave skies together for the first time. Even though LIGO has seen more merging black holes, the fact is that there are more merging neutron stars. The key, now, is finding them. We live at a moment where gravitational wave astronomy is just in its infancy, giving us a whole new way to look at the Universe.

The galaxy NGC 4993, located 130 million light years away, had been imaged many times before. But just after the August 17, 2017 detection of gravitational waves, a new transient source of light was seen: the optical counterpart of a neutron star-neutron star merger. Image credit: P.K. Blanchard / E. Berger / Pan-STARRS / DECam.


So they shifted the frequencies of the gravitational waves into the audio range so that you can 'hear' the collision.

Do you think this is helpful or misleading?
*And was what you heard in real time?*

@Steve Blackband #1: In fact, there isn't any shifting involved! LIGO's sensitivity range is right around 300 Hz (spanning a few tens of hertz up to a few kilohertz), which is squarely within the range of human hearing (middle C is about 262 Hz).

IF gravity traveled at the speed of light, how do you explain the actual orbits of the planets around the Sun? Interesting things happen if you set gravity's propagation speed to c. Unfortunately, stable planetary orbits are not one of them: orbital calculations depend on gravity acting much faster than c, nearly instantaneously, in order to work at all. The Earth also orbits a location approximately 20 arc seconds ahead of where the Sun appears in the sky: where it actually is, not where the light we are just now receiving, some eight minutes later, shows it to be.
Let's get skeptical and use Occam's razor to clear the air a bit instead:
What did LIGO actually detect? The laser wiggled (effect). The effect was attributed to gravity waves (cause). It is now admitted that what was detected was propagating at c. So what was actually detected: gravity waves traveling at c, which is in disagreement with our own planet's orbit, or just an electromagnetic effect to begin with? There is also the problem that gravity travels at c in Einstein's math only because he wanted it to; the equations are coordinate dependent, and you get differing speeds for gravity unless you cherry-pick your coordinates specifically to get c. There has been a lot of debate on this by physicists.
Ethan admitted the reason they could even 'detect' orbiting black holes to begin with is that they created such powerful gravitational waves. How much less powerful are the gravity waves of neutron stars? Orders of magnitude? Considerably? If LIGO had trouble detecting black holes, would it not have even more trouble detecting something far less massive? The sudden pivot to detecting orbiting neutron stars seems a bit suspect, combined with claims of c propagation for the gravity waves. In any case, something has got to give theoretically. I have no trouble picking an initial side: I'm siding with the direct evidence of our own planetary motion requiring faster-than-c propagation of gravity, over highly inferred and indirect evidence based on a theoretical template matched to a computer-processed 'wiggle' of something much farther away.
This to me seems much like the point of contention with BICEP2. Polarized dust WAS actually found (effect). Assumptions were made about how and when it was polarized (cause). The assumptions turned out to be wrong. The tragedy of BICEP2 wasn't that it didn't turn out the way they wanted; it was that not a single purportedly brilliant person on the entire research team, or even one of their financial backers, asked the obvious question the entire project revolved around: "How will we be able to discern the causes of the polarization?"
The same obvious question is there again: "How do you know what is causing your laser to wiggle?" You have a connection between a subject (neutron stars) and your detector now, but is it what you want it to be? Or is it something more mundane?

@CFT #3: What an awesome demonstration of ignorance.

"Gravity" doesn't "travel at the speed of light," any more than a _static_ (i.e., unchanging) electric or magnetic field travels at the speed of light. If you actually studied the physics you so blithely disdain, you'd already know this.

The gravitational field of the solar system is essentially fixed (the Sun's motion about the solar system barycenter is tiny compared to the sizes of any of the planets' orbits). The same is true, of course, for the orbits of all the planetary moons about their primaries. Consequently, the orbits of the Earth and the other planets are perfectly described by Keplerian ellipses. In Newtonian language, we can say that the force between the Sun and Earth is instantaneous. In Einsteinian language, we say that the solar system metric is static to high precision. The observable outcome is the same in both cases, and in both cases can be derived quantitatively by someone who can handle the maths. Your statements demonstrate clearly that you either can't, or just don't want to because it doesn't fit your argument.

What does travel at the speed of light (as measured to within +/- half a part in 10^15) are _disturbances_ in gravity, such as gravitational waves.

The interferometer didn't "just wiggle," as your willful ignorance would have it. The mirrors (not the laser, another demonstration of your ignorance) moved in a very specific oscillatory pattern, with a frequency which increased over time in a very specific, continuous way. What is observed, measured, is not just the movement, but the very specific time history of that movement. Your sidestepping of those quantitative technical details (here as in so many other of your ranting posts) demonstrates that you are either ill-equipped to understand them, or you do understand them but deliberately confabulate in order to support a false narrative.

Uh, CFT, I think the compounding of events, the gravitational wave, the gamma-ray burst, and the correlated optical detection, pretty much seals it as a legitimate detection.

Going forward, there will either be more detections with all the parts, or not. The GRB arriving within seconds of the gravitational-wave detection is pretty strong evidence.

Dearest Michael,
You lost me the very moment you called me names, many months ago. At this point I certainly would not take you seriously if you told me the sky was blue. Incompetent experts such as yourself should be cleaning toilets until you develop a semblance of humility, not advising people on anything. Now please, go pound some sand.

@MK #4: " In Newtonian language, we can say that the force between the Sun and Earth is instantaneous. In Einsteinian language, we say that the solar system metric is static to high precision."

I really enjoyed that. Pick your language and your theoretical paradigm. Make your argument from that bias.

What happened to the scientific method?!

@MobiusKlein #5,
What their 'detection' reveals in optical and radio is not in contention; those are electromagnetic in nature. What they claim they are detecting with their laser seismograph is. The fact that they are claiming detection of gravity waves at c means it could be something other than gravity waves. The math is not convincing me, as it does not stipulate that gravity even travels at c unless you choose very specific parameters. BICEP2 was claiming a positive detection at 7 sigma. Do you know what the statistical odds of that being wrong are? About 1 in 10 billion, and they were wrong.
Forgive me for my skepticism, but no, I don't believe their ridiculously high claims of certainty in their calculations; they aren't very credible at this point. Blowhards like Michael Kelsey having such peculiar overreactions to being questioned only further convinces me something odd is going on.
If Michael actually were half as informed as he claims, he would have known that the argument I was using to challenge the assertion of gravity waves in a vacuum traveling at light speed was not even mine, but came from someone with considerably more expertise. In other words, he accused me of making up a position actually taken by one of the finest astrophysicists of the twentieth century, who in fact worked alongside Einstein himself: A.S. Eddington.
“The statement that in the relativity theory gravitational waves are propagated with the speed of light has, I believe, been based entirely upon the foregoing investigation; but it will be seen that it is only true in a very conventional sense. If coordinates are chosen so as to satisfy a certain condition which has no very clear geometrical importance, the speed is that of light; if the coordinates are slightly different the speed is altogether different from that of light. The result stands or falls by the choice of coordinates and, so far as can be judged, the coordinates here used were purposely introduced in order to obtain the simplification which results from representing the propagation as occurring with the speed of light. The argument thus follows a vicious circle.”
---A.S. Eddington, The Mathematical Theory of Relativity
As I said, cherry-picked coordinates that give the predetermined result you want don't prove anything; it is just a math push that determines your speed of gravity to be c. If you pick some other coordinates, your speed will vary.

@CFT #6 (?)
Go easy on Michael; after all, he works for you, the taxpayer. Michael is an employee of a US government scientific institution and can provide vital input.
He can be a bit salty at times, but that's understandable given all the testosterone-fostered penis-measuring contests that have gone on here in the past.

Men in the science community should follow some of the decent lessons from physical sports in our society, where we disagree, deliver some hard hits, and then shake hands and move on. Holding long-term grudges does no one, ESPECIALLY the truth, any justice.

If there is an undetected (by instruments, of course) omnipresent field/medium, that would explain how everything is connected, and that direct connection would explain how the force of gravity could be instantaneous. Then we could throw out "spooky action at a distance" as a problem, which resulted in GR.

Notice the opening "If." Yes, I do theoretical physics too, even without credentials! (It's not illegal, except in mainstream physics.) "The fabric of spacetime", so malleable in response to mass, is a story like "The Emperor's New Clothes": only idiots can't see it. It's the ultimate hypocrisy in science today.

No mathematical velocity or superluminal speed need be applied to what need not move to be ever present. While such a supposition supervenes even the need for an initiating, isolated, theoretical, all-inclusive big 'bang' singularity as a hypothetical point of origin, even such a singular originating coalescence required prior gravitational and spatial presence. Also, there are no "holes" in space, and space is also not "black". Such seeming appearances result from telescopic limitations.
Since there are no "holes" in space (nor would they be sustained by the tremendous pressures within galaxies), what is being observed is other than as hypothesized and widely accepted. What is deemed to be "black" seems so because space so appears; however, space also has no color. The APPARENT darkness is due to telescopic equipment limitations, unable to detect and record the full spectrum and breadth and origins of spatial antecedents and content.

@Ragtag Media #9,
With due respect,
I'll change my stance on Michael when he changes his tone and learns to argue his point rather than talk down. Elitist snobbery is for badly behaved aristocrats, not civil servants.

You need to take a good look at ALL your past and present comments in this blog before blaming anyone for talking down to you!

You are obviously someone who sees all discussions as a fight for personal honor. I am pretty sure the other readers of this blog would prefer FRIENDLY DISCUSSIONS instead!

I think that comment alone is reason enough to get you banned from this blog; you need to realize that!

"I do not think those words mean what you think they mean." - Inigo Montoya (paraphrased)

Yesterday, in the Comments of the Week thread, you appeared to invite comments about the relationship between Science (primarily Physics in this blog) and Philosophy. So, at the risk of antagonizing those few remaining readers I have not yet irritated, the below is a brief reply. If I misinterpreted, please excuse this response.

Scientific Realism is, I think, better understood in a historical context. It originated as a response to Scientific Positivism/Empiricism, another interpretation of what distinguishes Science from other intellectual disciplines. That school of thought came about as the result of Einstein's publishing in 1905 of "On the Electrodynamics of Moving Bodies", his Special Theory of Relativity (SR). In "Does the Inertia of a Body Depend Upon Its Energy Content?", also published in 1905, Einstein determined a relationship between mass and energy.

SR and E=mc^2 were Revolutionary Science, and were criticized for violating Immanuel Kant's categorial scheme, which was a (if not the) contemporary philosophical world view. At that time, and to many people now, the three-dimensionality of space, Euclidean geometry, and the existence of absolute simultaneity were thought to be needed to understand Nature, and none of them should be altered by empirical findings. SR delivered a shock to physicists and to scientifically minded philosophers: it did not just point out surprising new facts, nor merely require strange new concepts. It revealed a disturbing lack of clarity within familiar concepts, such as simultaneity and length.

This sets the stage for the logical positivists/empiricists. Their goal was to build a new-and-improved version of empiricism, one that would make philosophy and the world safe for science. The central principle of logical empiricism is that any cognitively meaningful statement must be either analytic or a claim about possible experience. I shall not present an exposition of what an analytic statement might be, but if you are due for some penance, feel free to ask. Science deals with facts, hence the "claim about possible experience".

Once science is rooted in just the facts, the meaningfulness of a statement depends upon verification. By verification the positivists mean a method for finding its truth or falsehood. Since to be cognitively meaningful is to be either true or false, if there is the right sort of method for testing truth or falsehood, the statement will be meaningful.

Okay. All good, clean fun, and useful in clearing away some of the underbrush of 19th century European philosophy. Clarity uber alles, eh?

Now try to shoehorn the Bag Model of Quark Confinement into a vision limited by observation. Assuming it to be true, quarks are not merely unobserved but unobservable in principle: they can never be isolated for individual observation. Is that model of (presumed) quark confinement science, or not?

Scientific Realists find ways to relax many of the constraints which the Positivists had insisted science embrace. The Realists can then say, as the Positivists cannot, that science theorizes about quarks the same way it theorizes about macro-sized objects. So unobservable objects are, in this philosophical POV, not second-class citizens. Of course, there's a price to be paid: the distinctive virtues that science attained due to its tight connection to experience are forfeited, and metaphysical possibilities are let back in.

Frank #14,
A personal attack is not how you respond to an answer you think is incorrect. If you disagree, say you disagree and explain why, coherently; that's all.
Michael took the first shot. I responded. Every time Michael takes a kick at me under the table, I'm going to kick back twice as hard and try to make him reconsider his clumsy approach to disagreement.
The advice I gave him also wasn't arbitrary.
I've taken it myself.
I cleaned over forty toilets a day when I was a teenager; I worked at a fancy hotel near Santa Barbara on the housekeeping staff, and yes, it did teach humility, as I believe was my parents' intention. When you come home from work exhausted, and the skin on your hands is raw from being in Brasso polish and cleaning solvent, and your back is killing you from being bent over most of the day scrubbing and making beds, you realize this is all some people will ever know until they die, and you begin to look at the smug, self-entitled folk of the world a little differently. I don't care what you know or what your position is: if you talk down to me, I'll suggest you go scrub some toilets too. I've had to clean up after some of the biggest stars in Hollywood, quite literally, and in the process I quickly lost any sense of awe I ever had for fame or celebrity. Cleaning up other people's shit does wonders for gaining perspective.

As I understand it, Einstein's General Relativity (GR) says that if our Sun suddenly disappeared, it would take about 8 minutes for the lack of its gravitational effects to be felt by the Earth. However, the Sun has been in place for a long time, and its gravitational effects have long since propagated throughout our galaxy and beyond. (As it moves with and within our galaxy, the effects sweep along with that motion, but so does our solar system.) (Recall that according to GR, the effects are a strong local warping of space-time that constrains the orbital motion of nearby objects. This warping may vary slightly as the Sun itself orbits the center of mass of the solar system, but a) this is not a large effe

I read at another site that the equations of GR have been used to calculate (numerically) the orbits of the planets of our solar system for the next 5 billion years, and they all stayed in the same orbits. (This assumed no outside interference.)

To continue where I was somehow interrupted:

a) this is not a significant effect compared to planet orbital radii and b) it is a cyclic effect which is part of the existing orbits.

I will also add that there is an immense amount of evidence for GR; e.g., our GPS system would not work accurately unless GR corrections were included in it.

Any premise of origins need incorporate adequate inducements for such an occurrence.

Any premise of origins need incorporate original inducements.
Gravity, as an all-pervading static state, is not reasonably viewed as having once been confined singularly with space, when both are diverse in function and as such not subject to an intense coalescence, apart from some prior inducing, overpowering factor able to resolve their mutual functional, virtual incompatibility.

@John #16: Not irritating at all! I wish I had a good enough grounding in philosophy (and the history of philosophy) to engage in a meaningful debate. On the other hand, this is an opportunity for me to learn about stuff I have had less touch with.

You used an example to make a point that I think is not quite correct. First, in describing empiricism (which I _think_ is my own philosophical basis) you wrote, "Science deals with facts, hence the 'claim about possible experience'."

Later, using the example of the quark model, you questioned whether, in dealing with "unobservable entities," it counted as (empirical) science at all. I think this misses the point. Scientific theory deals with "unobservable entities" all the time, in many fields. Atoms, electrons, electric and magnetic fields, various energy potentials, those are all _unobservable_ in the 18th century sense of directly impinging on our senses.

The test of whether a scientific theory is empirical or not (i.e., whether it is "really science") is whether it can be used to quantitatively _predict_ observable results. QCD uses the unobservable quarks and their interactions to make definite, quantitative predictions about things like the mass of the proton, lifetimes and decay channels of various particles (yes, there's a philosophical chain of inference involved), even predicting new particle states before they were observed.

Whether the underlying invisible entities are real or not is, philosophically, an ontological problem; more pragmatically, it's a question of how well the theory actually works. We treat electric and magnetic fields as real, even though they're not directly observable to our senses, because electrodynamics works _perfectly_ every time it is applied to an observable situation, and has done so for a century and a half and counting. We treat atoms and electrons as real for the same reason. Perhaps quarks seem less real to you (and to most people) because they don't directly impact day to day life, and because QCD has only been around for fifty years?

Incompetent experts such as yourself should be cleaning toilets until you develop a semblance of humility, not advising people on anything. Now please, go pound some sand.

It's idiocy such as this that makes me happy that SWAB will cease to exist on SB (along with SB itself) at the end of the month. Enjoy Forbes, CFT.

". this is an opportunity for me to learn about stuff I have had less touch with."

My (limited) exposure to physicists persuades me that philosophy and physics are seldom pursued in tandem.

In re the scientific stature of unobservables: The logical positivists were able to integrate them into Science by the use of analytic truths, which have equal standing with observables (ref prior post), in their Received View of Theories, where theoretical propositions such as the bag model of quark confinement are given a partial observational interpretation by inference from their relationship to other, observable phenomena.

In contrast, you, I, and all (LOL! both) physicists I know are Popper's intellectual children.

Look in the mirror snowflake, and then keep scrubbing. Acting snobby while blaming others is not a winning combo. You are going to have to learn to share the playground of ideas a bit, it doesn't belong to you.

The moment Ethan had his Forbes site, this one became an afterthought, a neglected stepchild he didn't really care about anymore, and he wanted out of. Guess why?

@John #24,
If you are truly Popper's intellectual children, you realize the importance of falsifiability. Inflation theory and String Theory can't be falsified at all. Sabine Hossenfelder has pointed this out to Ethan several times, and he has agreed with her in personal discussion, but then glossed over it in his PR (things are fine) narrative on this and the Forbes site.

Yes, I am comfortable with the notion that a "scientific" theory cannot be proved, but it can be falsified. That criterion of falsifiability means that to be a scientific theory, the theory can and should be tested by experiment.

But what does one say about a theory that has been repeatedly tested using techniques independent of each other, and the results (observations) of those tests have been as predicted by the theory? I don’t know about you, but sooner or later I’d say it's correct, or even True, with a capital "T". I'd only start to discount its truthfulness if there started to accumulate observations that fell sufficiently far outside the theory’s prediction that they couldn’t be honestly attributed to experimental error. The first time some weird anomaly was reported, I’d go back and try to rework the observations within the theory, not instantly drop the theory, wouldn't you? Of course you would! Good physical theories are not found in the bargain bin at Walmart.

If that paragraph didn't sound vaguely familiar, it should have. Newton's law of universal gravitation produced accurate predictions for more than 200 years. When the observations (empirical tests) of the orbit of Uranus were seen to deviate from the predictions derived from that theory, first too fast and then too slow, instead of dropping the theory, in 1846 a scientist, Urbain Le Verrier, proposed the deviations could be resolved within the Newtonian theory. He suggested that the gravity of a farther, unknown planet was disturbing Uranus's orbit. He crunched the numbers (although it must have been a bitch to do back then), and Galle and d'Arrest saw Neptune where the "rethought" Newtonian theory said it was supposed to be. It wasn't until the turn of the 20th century that the illumination provided by Newtonian physics was to be clouded by the ultraviolet catastrophe and the curious case of the missing ether.

All theories operate successfully within domains. QM, SR and GR are physical theories that are less inaccurate than Newton's. They pass many more tests. Does that make them True? Technically no, but not only are all four scientific theories, you can safely rely on them every day.

I think you're overlooking some of Ethan's posted opinions about String Theory.

Factual cosmological antecedents are not entirely telescopically observable, being only inferred from keen visual observations and collaborative deductions.

@John #27,
When the 'working story' you tell yourself can't be falsified, and you can just 'change it' to agree with whatever you like, you aren't going to be able to break out of your epicycles. This is why I do not like group-think consensus and despise the Bayesian mindset which is turning the scientific community into a self-congratulatory echo chamber of self-referential paper writers.
There is good news for the future however:
Scientists eventually die, and someone else is going to come along who doesn't give a rat's ass about the political consensus of experts and their overly convoluted 'stories', and do something else.
This is what happened when a couple of bicycle builders from Ohio somehow managed to figure out what the rest of the scientific community could not.
Science advances one funeral at a time.

@Alby #29,
Try harder. Random word generators are only good for spam.

The detection of the gravitational waves produced by the merger of two neutron stars (GW170817) has allowed scientists to fix at about 70 km/s per megaparsec the value of the rate of expansion of the universe over the 130 million light years that separate us from the origin of said merger.
Extending this calculation to the speed of light over the whole age of the universe, we can do the inverse calculation to determine the average expansion rate implied by the age stated by the Big Bang theory.
The result is 300,000 km/s / (13,799/3.26) Mpc ≈ 70.9 km/s per Mpc.
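The commenter's arithmetic can be checked directly: divide the speed of light by the light-travel distance corresponding to the age of the universe, expressed in megaparsecs. This is only a back-of-envelope sketch of the comment's reasoning (the constants are rounded values, not precise cosmological parameters):

```python
# Back-of-envelope check of the comment's arithmetic.
C_KM_S = 300_000      # speed of light, km/s (rounded)
AGE_GYR = 13.799      # age of the universe, Gyr (Planck-era value)
MLY_PER_MPC = 3.26    # million light years per megaparsec (approx.)

# Light-travel distance over the age of the universe, in Mpc.
distance_mpc = AGE_GYR * 1000 / MLY_PER_MPC

# Speed of light divided by that distance gives a Hubble-like rate.
h0_like = C_KM_S / distance_mpc

print(f"{h0_like:.1f} km/s per Mpc")
```

With these rounded inputs the result is about 70.9 km/s per Mpc, numerically close to the GW170817 value quoted above, which is the coincidence the comment is pointing at.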


ScienceBlogs is where scientists communicate directly with the public. We are part of Science 2.0, a science education nonprofit operating under Section 501(c)(3) of the Internal Revenue Code. Please make a tax-deductible donation if you value independent science communication, collaboration, participation, and open access.


Dark Matter Murder Mystery: Is Weird Substance Destroying Neutron Stars?

The mysterious substance that makes up most of the matter in the universe may be destroying neutron stars by turning them into black holes in the center of the Milky Way, new research suggests.

If astronomers successfully detect a neutron star dying at the metaphorical hands of dark matter, such a finding could yield critical insights into the elusive properties of the material, scientists added.

Dark matter — an invisible substance thought to make up five-sixths of all matter in the universe — is currently one of the greatest mysteries in science. The consensus among researchers suggests that dark matter is composed of a new type of particle, one that interacts very weakly at best with all the known forces of the universe. As such, dark matter is invisible and nearly completely intangible, mostly detectable only via the gravitational pull it exerts. [8 Baffling Astronomy Mysteries]

Now, physicists suggest answers to the mystery of dark matter might lie in another puzzle, known as the missing pulsar problem.

A pulsar is a kind of neutron star, the super-dense remnant left behind when a massive star dies in a gigantic explosion known as a supernova. Neutron stars can devour matter from companion stars, acts of cannibalization that make neutron stars give off pulses of radiation, earning such neutron stars the name pulsar.

According to current astrophysical and cosmological models, several hundred pulsars should be orbiting the supermassive black hole at the heart of the Milky Way. However, searches for these pulsars by looking for the radio waves they emit have so far come up empty-handed.

Now researchers suggest dark matter could destroy these neutron stars, transforming them into black holes.

Dark matter, like ordinary matter, is drawn to the gravity of other matter. The greatest concentration of normal matter in the Milky Way is at its center, so the greatest concentration of dark matter is there as well.

In a region of high dark matter density such as the heart of the Milky Way, an enormous amount of dark matter particles could accumulate in a pulsar, causing it to grow massive enough to collapse and form a black hole.

"It is possible that pulsars imploding into black holes may provide the first concrete signal of particulate dark matter," said study co-author Joseph Bramante, a physicist at the University of Notre Dame.

The models of dark matter that are most consistent with this idea, and with observations of pulsars seen outside the galactic center, are ones that suggest dark matter is asymmetric, meaning there is more of one kind of dark matter particle than its antiparticle counterpart. Normal matter is asymmetric as well — there are far more protons in the universe than anti-protons. (When a particle and its antimatter counterpart meet, they annihilate each other, releasing a burst of energy — a proof of Einstein's famous equation, E=mc2, which revealed mass can be converted to energy and vice versa.)

"For me, the most surprising result is that already existing models of dark matter could cause pulsars at the galactic center to collapse into black holes," Bramante told

If dark matter is asymmetric, this would be consistent with "why there is more matter than antimatter in the universe, and why there is five times more dark matter than visible matter," Bramante added.

The mass of the dark matter particle responsible for imploding pulsars in the galactic core might be 100 times lighter than an electron or heavier than 100 million protons. If dark matter is as massive as 100 million protons, it would take more than 1,000 times the energies achievable at the LHC to create such particles, Bramante noted. This suggests that looking for an imploding pulsar in the centers of galaxies might be a more feasible way to learn about dark matter.

There might be other explanations for the missing pulsar problem. For instance, massive stars may form short-lived, highly magnetic pulsars known as magnetars in the galactic center rather than ordinary long-lived pulsars, perhaps because stars in the galactic core might be highly magnetized. The researchers are exploring how astronomers might identify whether a pulsar in the galactic core died because of dark matter, supporting their idea.

Bramante and his colleague Tim Linden detailed their findings Oct. 10 in the journal Physical Review Letters.