Astronomy

How can cosmic inflation make an infinite universe homogeneous?

As explained in this video, one of cosmic inflation's observable effects is the homogeneity of our universe. Inflation allows two points on opposite sides of the observable universe to have been causally connected at some point in time, so they could exchange their mass densities and temperatures, which then end up about the same for both of them.

While I understand that this effect doesn't have to end at the edge of the observable universe and can extend to whatever distance the rate of inflation allows, I do not understand how this could be valid for the entire Universe if it is infinite. My guess is that, if the Universe is infinite, there will always be two points that were never causally connected, no matter how fast it expanded during the inflationary period. That would mean that, while the Universe should be homogeneous on sufficiently large scales, it doesn't have to be on even larger scales.

I guess I either don't fully understand the idea of cosmic inflation or don't understand the particular sense in which the Universe is infinite.


Inflation is used to explain why the observable universe is extremely homogeneous.

Without inflation, we can do the following crude calculation. The cosmic microwave background was formed about 300,000 years after the big bang, at a redshift of about 1100. Thus causally connected regions at the epoch of CMB formation would have a radius of $\sim 300{,}000$ light years, which has now expanded by a factor of 1100 to be $3.3 \times 10^{8}$ light years in radius.

This can be compared with the radius of the observable universe, which is currently around 46 billion light years. This means that causally connected regions should make up only $\sim 4 \times 10^{-7}$ of the observable universe, or equivalently, only patches of CMB of $\sim 2$ degrees radius on the sky should be causally connected. Yet that is clearly not what we see: the variations in the CMB are no more than about 1 part in $10^{5}$ across the whole sky.
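A quick numerical check of these figures, as a minimal Python sketch (the inputs are just the round numbers quoted above, not a precise cosmological calculation):

    # Rough check of the causally connected patch estimate quoted above.
    radius_at_cmb_ly = 3.0e5          # ~300,000 light years: causal radius when the CMB formed
    expansion_factor = 1100           # growth in scale since the CMB (redshift ~1100)
    observable_radius_ly = 4.6e10     # ~46 billion light years: radius of the observable universe

    causal_radius_today_ly = radius_at_cmb_ly * expansion_factor
    volume_fraction = (causal_radius_today_ly / observable_radius_ly) ** 3

    print(causal_radius_today_ly)     # ~3.3e8 light years
    print(volume_fraction)            # ~4e-7 of the observable volume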

Inflation solves this by allowing previously causally connected regions to inflate to become larger than the entire observable universe.

You appear to understand this quite well, so I am not entirely clear what your question is. We cannot know whether the entire universe is homogeneous, since we cannot measure it. The cosmological principle is an assumption that appears to hold approximately true in the observable universe, but it need not apply to the universe as a whole. Indeed, it is not absolutely true even in the observable universe; otherwise the universe would be quite uninteresting, containing no galaxies, clusters, or other structure. I think the only requirement on inflation is that it blows up a patch of causally connected universe so that it becomes much bigger than the observable universe at the current epoch.


What If Cosmic Inflation Is Wrong?

The earliest stages of the Universe, before the Big Bang, are what set up the initial conditions that everything we see today has evolved from.

E. Siegel, with images derived from ESA/Planck and the DoE/NASA/NSF interagency task force on CMB research

All scientific ideas, no matter how accepted or widespread they are, are susceptible to being overturned. For all the successes any idea may have, it only takes one experiment or observation to falsify it, invalidate it, or necessitate that it be revised. Beyond that, every scientific idea or model has a limitation to its range of validity: Newtonian mechanics breaks down close to the speed of light; General Relativity breaks down at singularities; evolution breaks down when you reach the origin of life. Even the Big Bang has its limitations, as there's only so far back we can extrapolate the hot, dense, expanding state that gave rise to what we see today. Since 1980, the leading idea for describing what came before it has been cosmic inflation, for many compelling reasons. But recently, a spate of public statements has shown a deeper controversy:

  • In February, a group of theorists, including one of inflation's co-founders, claimed that inflation had failed.
  • The mainstream group of inflationary cosmologists, including inflation's inventor, Alan Guth, wrote a rebuttal.
  • This prompted the original group to dig in further, denouncing the rebuttal.
  • And earlier this week, a major publication and one of the rebuttal's co-signers highlighted and gave their perspective on the debate.

The expanding Universe, full of galaxies and complex structure we see today, arose from a smaller, hotter, denser, more uniform state.

C. Faucher-Giguère, A. Lidz, and L. Hernquist, Science 319, 5859 (47)

There are three things going on here: the problems with the Big Bang that led to the development of cosmic inflation, the solution(s) that cosmic inflation provides and generic behavior, and subsequent developments, consequences, and difficulties with the idea. Is that enough to cast doubt on the entire enterprise? Let's lay it all out for you to see.

Ever since we first recognized that there are galaxies beyond our own Milky Way, all the indications have shown us that our Universe is expanding. Because the wavelength of light is what determines its energy and temperature, the fabric of expanding space stretches those wavelengths to be longer, causing the Universe to cool. If the Universe is expanding and cooling as we head into the future, then it was closer together, denser, and hotter in the past. As we extrapolate farther and farther back, the hot, dense, uniform Universe tells us a story about its past.

The stars and galaxies we see today didn't always exist, and the farther back we go, the closer to an apparent singularity the Universe gets, but there is a limit to that extrapolation.

NASA, ESA, and A. Feild (STScI)

We arrive at a point where galaxy clusters, individual galaxies or even stars haven't had time to form due to the influence of gravity. We can go even earlier, where the amount of energy in particles and radiation makes it impossible for neutral atoms to form; they'd immediately be blasted apart. Even earlier, and atomic nuclei are blasted apart, preventing anything more complex than a proton or neutron from forming. Even earlier, and we begin creating matter/antimatter pairs spontaneously, due to the high energies present. And if you go all the way back, as far as your equations can take you, you'd arrive at a singularity, where all the matter and energy in the entire Universe were condensed into a single point: a singular event in spacetime. That was the original idea of the Big Bang.

If these three different regions of space never had time to thermalize, share information or transmit signals to one another, then why are they all the same temperature?

If that were the way things worked, there would be a number of puzzles based on the observations we had.

  1. Why would the Universe be the same temperature everywhere? The different regions of space from different directions wouldn't have had time to exchange information and thermalize; there's no reason for them to be the same temperature. Yet the Universe, everywhere we looked, had the same background 2.73 K temperature.
  2. Why would the Universe be perfectly spatially flat? The expansion rate and the energy density are two completely independent quantities, yet they must be equal to one part in 10^24 in order to produce the flat Universe we have today.
  3. Why are there no leftover high-energy relics, as practically every high-energy theory predicts? There are no magnetic monopoles, no heavy, right-handed neutrinos, no relics from grand unification, etc. Why not?

In 1979, Alan Guth had the idea that an early phase of exponential expansion preceding the hot Big Bang could solve all of these problems, and would make additional predictions about the Universe that we could go and look for. This was the big idea of cosmic inflation.

In 1979, Alan Guth had a revelation that a period of exponential expansion in the Universe's past could set up and provide the initial conditions for the Big Bang.

Alan Guth’s 1979 notebook, tweeted via @SLAClab

This type of expansion, exponential expansion, is different from what happened for the majority of the Universe's history. When your Universe is full of matter and radiation, the energy density drops as the Universe expands. As the volume expands, the density goes down, and so the expansion rate goes down, too. But during inflation, the Universe is filled with energy inherent to space itself, so as the Universe expands, it simply creates more space, and that keeps the density the same, and prevents the expansion rate from dropping. This, all at once, solves the three puzzles as follows:

  1. The Universe is the same temperature everywhere today because disparate, distant regions were once connected in the distant past, before the exponential expansion drove them apart.
  2. The Universe is flat because inflation stretched it to be indistinguishable from flat; the part of the Universe that's observable to us is so small relative to how much inflation stretched it that it's unlikely to be any other way.
  3. And the reason there are no high-energy relics is because inflation pushed them away via the exponential expansion, and then when inflation ended and the Universe got hot again, it never achieved the ultra-high temperatures necessary to create them again.
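The contrast described above (matter and radiation diluting as space expands, while vacuum-like energy does not) can be written with the flat Friedmann equation, as a standard sketch not spelled out in the article itself: only in the vacuum-energy case does the expansion rate stay constant and the scale factor grow exponentially:

$H^{2} = \left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho, \qquad \rho_{\rm matter} \propto a^{-3}, \quad \rho_{\rm radiation} \propto a^{-4}, \quad \rho_{\rm vacuum} = \mathrm{const} \;\Rightarrow\; a(t) \propto e^{Ht}.$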

By the early 1980s, not only did inflation solve those puzzles, but we also began coming up with models that successfully recovered a Universe that was isotropic (the same in all directions) and homogeneous (the same in all locations), consistent with all our observations.

The fluctuations in the Cosmic Microwave Background were first measured accurately by COBE in the 1990s, then more accurately by WMAP in the 2000s and Planck (above) in the 2010s. This image encodes a huge amount of information about the early Universe.

ESA and the Planck Collaboration

These predictions are interesting, but not enough, of course. For a physical theory to go from interesting to compelling to validated, it needs to make new predictions that can then be tested. It's important not to gloss over the fact that these early models of inflation did exactly that, making six important predictions:

  1. The Universe should be perfectly flat. Yes, that was one of the original motivations for it, but at the time, we had very weak constraints. 100% of the Universe could be in matter and 0% in curvature; 5% could be matter and 95% could be curvature; or anywhere in between. Inflation, quite generically, predicted that 100% needed to be "matter plus whatever else," but curvature should be 0%. This prediction has been validated by our ΛCDM model, where 5% is matter, 27% is dark matter and 68% is dark energy; curvature is still 0%.
  2. There should be an almost scale-invariant spectrum of fluctuations. If quantum physics is real, then the Universe should have experienced quantum fluctuations even during inflation. These fluctuations should be stretched, exponentially, across the Universe. When inflation ends, these fluctuations should get turned into matter and radiation, giving rise to overdense and underdense regions that grow into stars and galaxies, or great cosmic voids. Because of how inflation proceeds in the final stages, the fluctuations should be slightly greater on either small scales or large scales, depending on the model of inflation. For perfect scale invariance, a parameter we call n_s would equal 1 exactly; n_s is observed to be 0.96.
  3. There should be fluctuations on scales larger than light could have traveled since the Big Bang. This is another consequence of inflation, but there's no way to get a coherent set of fluctuations on large scales like this without something stretching them across cosmic distances. The fact that we see these fluctuations in the cosmic microwave background and in the large-scale structure of the Universe — and didn't know about them in the early 1980s — further validates inflation.
  4. These quantum fluctuations, which translate into density fluctuations, should be adiabatic. Fluctuations could have come in different types: adiabatic, isocurvature, or a mixture of the two. Inflation predicted that these fluctuations should have been 100% adiabatic, which should leave unique signatures in both the cosmic microwave background and the Universe's large-scale structure. Observations bear out that yes, in fact, the fluctuations were adiabatic: of constant entropy everywhere.
  5. There should be an upper limit, smaller than the Planck scale, to the temperature of the Universe in the distant past. This is also a signature that shows up in the cosmic microwave background: how high a temperature the Universe reached at its hottest. Remember, if there were no inflation, the Universe should have gone up to arbitrarily high temperatures at early times, approaching a singularity. But with inflation, there's a maximum temperature that must be at energies lower than the Planck scale (~10^19 GeV). What we see, from our observations, is that the Universe achieved temperatures no higher than about 0.1% of that (~10^16 GeV).

The final prediction of cosmic inflation is the existence of primordial gravitational waves. It is the only prediction not yet verified by observation.

National Science Foundation (NASA, JPL, Keck Foundation, Moore Foundation, related) — Funded BICEP2 Program modifications by E. Siegel

So inflation has a tremendous number of successes to its name. But since the late 1980s, theorists have spent a lot of time cooking up a variety of inflationary models. They've found some incredibly odd, non-generic behavior in some of them, including exceptions that break some of the predictive rules, above. In general, the simplest inflationary models are based on a potential: you draw a line with a trough or well at the bottom, the inflationary field starts off at some point away from that bottom, and it slowly rolls down towards the bottom, resulting in inflation until it settles at its minimum. Quantum effects play a role in the field, but eventually, inflation ends, converting that field energy into matter and radiation, resulting in the Big Bang.

The Universe we see today is based on the initial conditions it began with, which are dictated, predictively, by which model of cosmic inflation you choose.

Sloan Digital Sky Survey (SDSS)

But you can make multi-field models, fast-roll models instead of slow-roll models, contrived models that have large departures from flatness, and so on. In other words, if you can make the models as complex as you want, you can find one that gives departures from the generic behavior described above, sometimes even resulting in departures from one or more of these six predictions.

The fluctuations in the CMB are based on primordial fluctuations produced by inflation. In particular, the 'flat part' on large scales (at left) has no explanation without inflation.

This is what the current controversy is all about! One side goes so far as to claim that because you can contrive models that can give you almost arbitrary behavior, inflation fails to rise to the standard of a scientific theory. The other side claims that inflation makes these generic, successful predictions, and that the better we measure these parameters of the Universe, the more we constrain which models are viable, and the closer we come to understanding which one(s) best describe our physical reality.

The shape of gravitational wave fluctuations is indisputable from inflation, but the magnitude of the spectrum is entirely model-dependent. Measuring this will put the debate over inflation to rest, but if the magnitude is too low to be detected over the next 25 years or so, the argument may never be settled.

What no one disputes is that without inflation, or something else very much like inflation (stretching the Universe flat, preventing it from reaching high energies, creating the density fluctuations we see today, causing the Universe to begin at the same temperature everywhere, etc.), there's no explanation for the initial conditions the Universe starts off with. Alternatives to inflation have that hurdle to overcome, and right now there is no alternative that has displayed the same predictive power that the inflationary paradigm brings. That doesn't mean that inflation is necessarily right, but there sure is a lot of good evidence for it, and many of the "possible" models that can be concocted have already been ruled out. Until an alternative model can achieve all of inflation's successes, cosmic inflation will remain the leading idea for where our hot Big Bang came from.



Around 1930, Edwin Hubble discovered that light from remote galaxies was redshifted: the more remote, the more shifted. This was quickly interpreted as meaning galaxies were receding from Earth. If Earth is not in some special, privileged, central position in the universe, then it would mean all galaxies are moving apart, and the further away, the faster they are moving away. It is now understood that the universe is expanding, carrying the galaxies with it, and causing this observation. Many other observations agree, and also lead to the same conclusion. However, for many years it was not clear why or how the universe might be expanding, or what it might signify.

Based on a huge amount of experimental observation and theoretical work, it is now believed that the reason for the observation is that space itself is expanding, and that it expanded very rapidly within the first fraction of a second after the Big Bang. This kind of expansion is known as a "metric" expansion. In the terminology of mathematics and physics, a "metric" is a measure of distance that satisfies a specific list of properties, and the term implies that the sense of distance within the universe is itself changing. Today, metric variation is far too small an effect to see on less than an intergalactic scale.
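As a concrete illustration (standard cosmology, not derived in this article), the metric in question is the Friedmann–Lemaître–Robertson–Walker line element, in which every spatial distance is multiplied by a time-dependent scale factor $a(t)$:

$ds^{2} = -c^{2}\,dt^{2} + a(t)^{2}\left[\frac{dr^{2}}{1 - k r^{2}} + r^{2}\,d\theta^{2} + r^{2}\sin^{2}\theta\,d\phi^{2}\right],$

so the proper distance between two comoving points grows with $a(t)$ even though their coordinates never change.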

The modern explanation for the metric expansion of space was proposed by physicist Alan Guth in 1979, while investigating the problem of why no magnetic monopoles are seen today. He found that if the universe contained a field in a positive-energy false vacuum state, then according to general relativity it would generate an exponential expansion of space. It was very quickly realized that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation theory largely resolves these problems as well, thus making a universe like ours much more likely in the context of Big Bang theory.

No physical field has yet been discovered that is responsible for this inflation. However, such a field would be scalar, and the first relativistic scalar field proven to exist, the Higgs field, was only discovered in 2012–2013 and is still being researched. So it is not seen as problematic that a field responsible for cosmic inflation and the metric expansion of space has not yet been discovered. The proposed field and its quanta (the subatomic particles related to it) have been named the inflaton. If this field did not exist, scientists would have to propose a different explanation for all the observations that strongly suggest a metric expansion of space has occurred, and is still occurring (much more slowly) today.

An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of Earth's surface, marks the boundary of the part of the Universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon in an accelerating universe never reaches the observer, because the space in between the observer and the object is expanding too rapidly.

The observable universe is one causal patch of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. [14] Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They couldn't have learned it by getting signals, because they were not previously in communication with our past light cone. [15] [16]

Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant. With exponentially expanding space, two nearby observers are separated very quickly; so much so that the distance between them quickly exceeds the limits of communication. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous.

As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are at nearly the same temperature and curvature, because they come from the same originally small patch of space.

The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter and residual vacuum energy in the Universe have to add up to the critical density, and the evidence supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed. [17] [18]

Space expands

In a space that expands exponentially (or nearly exponentially) with time, any pair of free-floating objects that are initially at rest will move apart from each other at an accelerating rate, at least as long as they are not bound together by any force. From the point of view of one such object, the spacetime is something like an inside-out Schwarzschild black hole—each object is surrounded by a spherical event horizon. Once the other object has fallen through this horizon it can never return, and even light signals it sends will never reach the first object (at least so long as the space continues to expand exponentially).

In the approximation that the expansion is exactly exponential, the horizon is static and remains a fixed physical distance away. This patch of an inflating universe can be described by the following metric: [19] [20]

$ds^{2} = -dt^{2} + e^{2\sqrt{\Lambda/3}\,t}\,d\mathbf{x}^{2}$

This exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy density that is constant in space and time and proportional to Λ in the above metric. For the case of exactly exponential expansion, the vacuum energy has a negative pressure p equal in magnitude to its energy density ρ; the equation of state is p = −ρ.
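This equation of state can be read off from the fluid continuity equation (a one-line sketch): a density that stays constant while space expands must have negative pressure of equal magnitude:

$\dot{\rho} + 3H\,(\rho + p) = 0, \qquad \dot{\rho} = 0,\ H \neq 0 \;\Rightarrow\; p = -\rho.$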

Inflation is typically not an exactly exponential expansion, but rather quasi- or near-exponential. In such a universe the horizon will slowly grow with time as the vacuum energy density gradually decreases.

Few inhomogeneities remain

Because the accelerating expansion of space stretches out any initial variations in density or temperature to very large length scales, an essential feature of inflation is that it smooths out inhomogeneities and anisotropies, and reduces the curvature of space. This pushes the Universe into a very simple state in which it is completely dominated by the inflaton field and the only significant inhomogeneities are tiny quantum fluctuations. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics. If the Universe was only hot enough to form such particles before a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary "no-hair theorem" [21] by analogy with the no hair theorem for black holes.

The "no-hair" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for philosophical disagreements about what is on the other side. The interpretation of the no-hair theorem is that the Universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the Universe increases. For example, the density of ordinary "cold" matter (dust) goes down as the inverse of the volume: when linear dimensions double, the energy density goes down by a factor of eight. The radiation energy density goes down even more rapidly as the Universe expands, since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion: when linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles is falling, and through sufficient inflation these all become negligible. This leaves the Universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins. [22]
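The scalings in this paragraph, written out as a trivial Python sketch:

    # Dilution factors when linear dimensions double (scale factor a -> 2a), as described above.
    a_ratio = 2.0
    matter_factor    = a_ratio ** 3   # rho_matter ~ a^-3: energy density falls by 8
    radiation_factor = a_ratio ** 4   # rho_radiation ~ a^-4: falls by 16 (extra redshift of each photon)
    vacuum_factor    = a_ratio ** 0   # inflaton / vacuum energy density: roughly constant, factor 1

    print(matter_factor, radiation_factor, vacuum_factor)   # 8.0 16.0 1.0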

Duration

A key requirement is that inflation must continue long enough to produce the present observable universe from a single, small inflationary Hubble volume. This is necessary to ensure that the Universe appears flat, homogeneous and isotropic at the largest observable scales. This requirement is generally thought to be satisfied if the Universe expanded by a factor of at least 10^26 during inflation. [23]
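In the e-fold language usually used to state this requirement, an expansion factor of $10^{26}$ corresponds to $N = \ln(10^{26}) = 26 \ln 10 \approx 60$ e-folds of inflation.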

Reheating

Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model-dependent, but in the first models it was typically from 10^27 K down to 10^22 K. [24] ) This relatively low temperature is maintained during the inflationary phase. When inflation ends, the temperature returns to the pre-inflationary temperature; this is called reheating or thermalization because the large potential energy of the inflaton field decays into particles and fills the Universe with Standard Model particles, including electromagnetic radiation, starting the radiation dominated phase of the Universe. Because the nature of the inflation is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance. [25] [26]

Inflation resolves several problems in Big Bang cosmology that were discovered in the 1970s. [27] Inflation was first proposed by Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was very quickly realised that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the Universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory.

Horizon problem

The horizon problem is the problem of determining why the Universe appears statistically homogeneous and isotropic in accordance with the cosmological principle. [28] [29] [30] For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light and thus have never come into causal contact. In the early Universe, it was not possible to send a light signal between the two regions. Because they have had no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). Historically, proposed solutions included the Phoenix universe of Georges Lemaître, [31] the related oscillatory universe of Richard Chace Tolman, [32] and the Mixmaster universe of Charles Misner. Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the Universe more chaotic, could lead to statistical homogeneity and isotropy. [29] [33]

Flatness problem

The flatness problem is sometimes called one of the Dicke coincidences (along with the cosmological constant problem). [34] [35] It became known in the 1960s that the density of matter in the Universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry). [36] : 61

Therefore, regardless of the shape of the universe the contribution of spatial curvature to the expansion of the Universe could not be much greater than the contribution of matter. But as the Universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the Universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at Big Bang nucleosynthesis, for example). This problem is exacerbated by recent observations of the cosmic microwave background that have demonstrated that the Universe is flat to within a few percent. [37]
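The fine-tuning can be made explicit with the standard curvature relation (a sketch, not part of the cited text):

$\Omega - 1 = \frac{k}{a^{2} H^{2}},$

which grows in proportion to $a^{2}$ during radiation domination and to $a$ during matter domination, so any early departure from flatness is amplified with time; during inflation, by contrast, $aH$ grows exponentially and $\Omega - 1$ is driven toward zero.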

Magnetic-monopole problem

The magnetic monopole problem, sometimes called the exotic-relics problem, says that if the early universe were very hot, a large number of very heavy, stable magnetic monopoles would have been produced. This is a problem with Grand Unified Theories, which propose that at high temperatures (such as in the early universe) the electromagnetic force, strong, and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory. [38] These theories predict a number of heavy, stable particles that have not been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy "charge" of magnetic field. [39] [40] Monopoles are predicted to be copiously produced following Grand Unified Theories at high temperature, [41] [42] and they should have persisted to the present day, to such an extent that they would become the primary constituent of the Universe. [43] [44] Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the Universe. [45] A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: monopoles would be separated from each other as the Universe around them expands, potentially lowering their observed density by many orders of magnitude. Though, as cosmologist Martin Rees has written, "Skeptics about exotic physics might not be hugely impressed by a theoretical argument to explain the absence of particles that are themselves only hypothetical. Preventive medicine can readily seem 100 percent effective against a disease that doesn't exist!" [46]

Precursors

In the early days of General Relativity, Albert Einstein introduced the cosmological constant to allow a static solution, which was a three-dimensional sphere with a uniform density of matter. Later, Willem de Sitter found a highly symmetric inflating universe, which described a universe with a cosmological constant that is otherwise empty. [47] It was discovered that Einstein's universe is unstable, and that small fluctuations cause it to collapse or turn into a de Sitter universe.

In the early 1970s, Zeldovich noticed the flatness and horizon problems of Big Bang cosmology; before his work, cosmology was presumed to be symmetrical on purely philosophical grounds. In the Soviet Union, this and other considerations led Belinski and Khalatnikov to analyze the chaotic BKL singularity in General Relativity. Misner's Mixmaster universe attempted to use this chaotic behavior to solve the cosmological problems, with limited success.

False vacuum

In the late 1970s, Sidney Coleman applied the instanton techniques developed by Alexander Polyakov and collaborators to study the fate of the false vacuum in quantum field theory. Like a metastable phase in statistical mechanics—water below the freezing temperature or above the boiling point—a quantum field would need to nucleate a large enough bubble of the new vacuum, the new phase, in order to make a transition. Coleman found the most likely decay pathway for vacuum decay and calculated the inverse lifetime per unit volume. He eventually noted that gravitational effects would be significant, but he did not calculate these effects and did not apply the results to cosmology.

The universe could have been spontaneously created from nothing (no space, time, nor matter) by quantum fluctuations of metastable false vacuum causing an expanding bubble of true vacuum. [48]

Starobinsky inflation

In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. These generically lead to curvature-squared corrections to the Einstein–Hilbert action and a form of f(R) modified gravity. The solution to Einstein's equations in the presence of curvature squared terms, when the curvatures are large, leads to an effective cosmological constant. Therefore, he proposed that the early universe went through an inflationary de Sitter era. [49] This resolved the cosmology problems and led to specific predictions for the corrections to the microwave background radiation, corrections that were then calculated in detail. Starobinsky used the action

$S = \frac{1}{2\kappa} \int \left( R + \frac{R^{2}}{6M^{2}} \right) \sqrt{-g}\, d^{4}x$

which corresponds to the potential

$V(\phi) = \Lambda^{4} \left( 1 - e^{-\sqrt{2/3}\,\phi/M_{p}} \right)^{2}$

in the Einstein frame. This results in the observables $n_{s} = 1 - \frac{2}{N}$ and $r = \frac{12}{N^{2}}$. [50]
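Evaluating these expressions for typical e-fold numbers makes the predictions concrete (an illustrative Python sketch; the values of N are assumptions, not taken from the text):

    # Starobinsky-model observables n_s = 1 - 2/N and r = 12/N^2 for representative e-fold counts.
    for N in (50, 55, 60):
        n_s = 1 - 2 / N
        r = 12 / N ** 2
        print(N, round(n_s, 3), round(r, 4))
    # N = 60 gives n_s ~ 0.967 and r ~ 0.0033, close to the measured n_s ~ 0.96-0.97.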

Monopole problem

In 1978, Zeldovich noted the monopole problem, which was an unambiguous quantitative version of the horizon problem, this time in a subfield of particle physics, which led to several speculative attempts to resolve it. In 1980 Alan Guth realized that false vacuum decay in the early universe would solve the problem, leading him to propose a scalar-driven inflation. Starobinsky's and Guth's scenarios both predicted an initial de Sitter phase, differing only in mechanistic details.

Early inflationary models

Guth proposed inflation in January 1981 to explain the nonexistence of magnetic monopoles; [51] [52] it was Guth who coined the term "inflation". [53] At the same time, Starobinsky argued that quantum corrections to gravity would replace the initial singularity of the Universe with an exponentially expanding de Sitter phase. [54] In October 1980, Demosthenes Kazanas suggested that exponential expansion could eliminate the particle horizon and perhaps solve the horizon problem, [55] [56] while Sato suggested that an exponential expansion could eliminate domain walls (another kind of exotic relic). [57] In 1981, Einhorn and Sato [58] published a model similar to Guth's and showed that it would resolve the puzzle of the magnetic monopole abundance in Grand Unified Theories. Like Guth, they concluded that such a model not only required fine tuning of the cosmological constant, but also would likely lead to a much too granular universe, i.e., to large density variations resulting from bubble wall collisions.

Guth proposed that as the early universe cooled, it was trapped in a false vacuum with a high energy density, which is much like a cosmological constant. As the very early universe cooled it was trapped in a metastable state (it was supercooled), which it could only decay out of through the process of bubble nucleation via quantum tunneling. Bubbles of true vacuum spontaneously form in the sea of false vacuum and rapidly begin expanding at the speed of light. Guth recognized that this model was problematic because it did not reheat properly: when the bubbles nucleated, they did not generate any radiation. Radiation could only be generated in collisions between bubble walls. But if inflation lasted long enough to solve the initial conditions problems, collisions between bubbles became exceedingly rare. In any one causal patch it is likely that only one bubble would nucleate.

… Kazanas (1980) called this phase of the early Universe "de Sitter's phase." The name "inflation" was given by Guth (1981). … Guth himself did not refer to the work of Kazanas until he published a book on the subject under the title "The inflationary universe: the quest for a new theory of cosmic origin" (1997), where he apologizes for not having referenced the work of Kazanas and of others related to inflation. [59]

Slow-roll inflation

The bubble collision problem was solved by Linde [60] and independently by Andreas Albrecht and Paul Steinhardt [61] in a model named new inflation or slow-roll inflation (Guth's model then became known as old inflation). In this model, instead of tunneling out of a false vacuum state, inflation occurred by a scalar field rolling down a potential energy hill. When the field rolls very slowly compared to the expansion of the Universe, inflation occurs. However, when the hill becomes steeper, inflation ends and reheating can occur.

Effects of asymmetries

Eventually, it was shown that new inflation does not produce a perfectly symmetric universe, but that quantum fluctuations in the inflaton are created. These fluctuations form the primordial seeds for all structure created in the later universe. [62] These fluctuations were first calculated by Viatcheslav Mukhanov and G. V. Chibisov in analyzing Starobinsky's similar model. [63] [64] [65] In the context of inflation, they were worked out independently of the work of Mukhanov and Chibisov at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University. [66] The fluctuations were calculated by four groups working separately over the course of the workshop: Stephen Hawking; [67] Starobinsky; [68] Guth and So-Young Pi; [69] and Bardeen, Steinhardt and Turner. [70]

Inflation is a mechanism for realizing the cosmological principle, which is the basis of the standard model of physical cosmology: it accounts for the homogeneity and isotropy of the observable universe. In addition, it accounts for the observed flatness and absence of magnetic monopoles. Since Guth's early work, each of these observations has received further confirmation, most impressively by the detailed observations of the cosmic microwave background made by the Planck spacecraft. [71] This analysis shows that the Universe is flat to within 0.5 percent, and that it is homogeneous and isotropic to one part in 100,000.

Inflation predicts that the structures visible in the Universe today formed through the gravitational collapse of perturbations that were formed as quantum mechanical fluctuations in the inflationary epoch. The detailed form of the spectrum of perturbations, called a nearly-scale-invariant Gaussian random field, is very specific and has only two free parameters. One is the amplitude of the spectrum and the spectral index, which measures the slight deviation from scale invariance predicted by inflation (perfect scale invariance corresponds to the idealized de Sitter universe). [72] The other free parameter is the tensor to scalar ratio. The simplest inflation models, those without fine-tuning, predict a tensor to scalar ratio near 0.1. [73]

Inflation predicts that the observed perturbations should be in thermal equilibrium with each other (these are called adiabatic or isentropic perturbations). This structure for the perturbations has been confirmed by the Planck spacecraft, WMAP spacecraft and other cosmic microwave background (CMB) experiments, and galaxy surveys, especially the ongoing Sloan Digital Sky Survey. [74] These experiments have shown that the one part in 100,000 inhomogeneities observed have exactly the form predicted by theory. There is evidence for a slight deviation from scale invariance. The spectral index, n_s, is one for a scale-invariant Harrison–Zel'dovich spectrum. The simplest inflation models predict that n_s is between 0.92 and 0.98. [75] [73] [76] [77] This is the range that is possible without fine-tuning of the parameters related to energy. [76] From Planck data it can be inferred that n_s = 0.968 ± 0.006, [71] [78] and a tensor to scalar ratio that is less than 0.11. These are considered an important confirmation of the theory of inflation. [17]
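A trivial Python sketch comparing the numbers quoted in this paragraph:

    # Simple-model prediction versus the Planck measurement quoted above.
    ns_predicted = (0.92, 0.98)         # range from the simplest inflation models without fine-tuning
    ns_planck, ns_err = 0.968, 0.006    # Planck value
    r_upper_limit = 0.11                # bound on the tensor-to-scalar ratio

    inside = ns_predicted[0] <= ns_planck <= ns_predicted[1]
    print(inside)                       # True: the measured spectral index sits inside the predicted range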

Various inflation theories have been proposed that make radically different predictions, but they generally have much more fine tuning than should be necessary. [75] [73] As a physical model, however, inflation is most valuable in that it robustly predicts the initial conditions of the Universe based on only two adjustable parameters: the spectral index (that can only change in a small range) and the amplitude of the perturbations. Except in contrived models, this is true regardless of how inflation is realized in particle physics.

Occasionally, effects are observed that appear to contradict the simplest models of inflation. The first-year WMAP data suggested that the spectrum might not be nearly scale-invariant, but might instead have a slight curvature. [79] However, the third-year data revealed that the effect was a statistical anomaly. [17] Another effect remarked upon since the first cosmic microwave background satellite, the Cosmic Background Explorer, is that the amplitude of the quadrupole moment of the CMB is unexpectedly low and the other low multipoles appear to be preferentially aligned with the ecliptic plane. Some have claimed that this is a signature of non-Gaussianity and thus contradicts the simplest models of inflation. Others have suggested that the effect may be due to other new physics, foreground contamination, or even publication bias. [80]

An experimental program is underway to further test inflation with more precise CMB measurements. In particular, high precision measurements of the so-called "B-modes" of the polarization of the background radiation could provide evidence of the gravitational radiation produced by inflation, and could also show whether the energy scale of inflation predicted by the simplest models (10^15–10^16 GeV) is correct. [73] [76] In March 2014, the BICEP2 team announced B-mode CMB polarization confirming inflation had been demonstrated. The team announced the tensor-to-scalar power ratio r was between 0.15 and 0.27 (rejecting the null hypothesis; r is expected to be 0 in the absence of inflation). [81] However, on 19 June 2014, lowered confidence in confirming the findings was reported; [82] [83] [84] on 19 September 2014, a further reduction in confidence was reported; [85] [86] and, on 30 January 2015, even less confidence yet was reported. [87] [88] By 2018, additional data suggested, with 95% confidence, that r is 0.06 or lower: consistent with the null hypothesis, but still also consistent with many remaining models of inflation. [81]

Other potentially corroborating measurements are expected from the Planck spacecraft, although it is unclear if the signal will be visible, or if contamination from foreground sources will interfere. [89] Other forthcoming measurements, such as those of 21 centimeter radiation (radiation emitted and absorbed from neutral hydrogen before the first stars formed), may measure the power spectrum with even greater resolution than the CMB and galaxy surveys, although it is not known if these measurements will be possible or if interference with radio sources on Earth and in the galaxy will be too great. [90]

Is the theory of cosmological inflation correct, and if so, what are the details of this epoch? What is the hypothetical inflaton field giving rise to inflation?

In Guth's early proposal, it was thought that the inflaton was the Higgs field, the field that explains the mass of the elementary particles. [52] It is now believed by some that the inflaton cannot be the Higgs field, [91] although the recent discovery of the Higgs boson has increased the number of works considering the Higgs field as inflaton. [92] One problem of this identification is the current tension with experimental data at the electroweak scale, [93] which is currently under study at the Large Hadron Collider (LHC). Other models of inflation relied on the properties of Grand Unified Theories. [61] Since the simplest models of grand unification have failed, it is now thought by many physicists that inflation will be included in a supersymmetric theory such as string theory or a supersymmetric grand unified theory. At present, while inflation is understood principally by its detailed predictions of the initial conditions for the hot early universe, the particle physics is largely ad hoc modelling. As such, although predictions of inflation have been consistent with the results of observational tests, many open questions remain.

Fine-tuning problem

One of the most severe challenges for inflation arises from the need for fine tuning. In new inflation, the slow-roll conditions must be satisfied for inflation to occur. The slow-roll conditions say that the inflaton potential must be flat (compared to the large vacuum energy) and that the inflaton particles must have a small mass. [94] New inflation requires the Universe to have a scalar field with an especially flat potential and special initial conditions. However, explanations for these fine-tunings have been proposed. For example, classically scale invariant field theories, where scale invariance is broken by quantum effects, provide an explanation of the flatness of inflationary potentials, as long as the theory can be studied through perturbation theory. [95]

Linde proposed a theory known as chaotic inflation in which he suggested that the conditions for inflation were actually satisfied quite generically. Inflation will occur in virtually any universe that begins in a chaotic, high energy state that has a scalar field with unbounded potential energy. [96] However, in his model the inflaton field necessarily takes values larger than one Planck unit: for this reason, these are often called large field models and the competing new inflation models are called small field models. In this situation, the predictions of effective field theory are thought to be invalid, as renormalization should cause large corrections that could prevent inflation. [97] This problem has not yet been resolved and some cosmologists argue that the small field models, in which inflation can occur at a much lower energy scale, are better models. [98] While inflation depends on quantum field theory (and the semiclassical approximation to quantum gravity) in an important way, it has not been completely reconciled with these theories.

Brandenberger commented on fine-tuning in another situation. [99] The amplitude of the primordial inhomogeneities produced in inflation is directly tied to the energy scale of inflation. This scale is suggested to be around 10^16 GeV, or 10^−3 times the Planck energy. The natural scale is naïvely the Planck scale, so this small value could be seen as another form of fine-tuning (called a hierarchy problem): the energy density given by the scalar potential is down by 10^−12 compared to the Planck density. This is not usually considered to be a critical problem, however, because the scale of inflation corresponds naturally to the scale of gauge unification.
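The factor of $10^{-12}$ follows from the fact that energy density scales as the fourth power of the energy scale:

$\frac{\rho_{\rm inflation}}{\rho_{\rm Planck}} \sim \left(\frac{10^{16}\ \mathrm{GeV}}{10^{19}\ \mathrm{GeV}}\right)^{4} = 10^{-12}.$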

Eternal inflation

In many models, the inflationary phase of the Universe's expansion lasts forever in at least some regions of the Universe. This occurs because inflating regions expand very rapidly, reproducing themselves. Unless the rate of decay to the non-inflating phase is sufficiently fast, new inflating regions are produced more rapidly than non-inflating regions. In such models, most of the volume of the Universe is continuously inflating at any given time.

All models of eternal inflation produce an infinite, hypothetical multiverse, typically a fractal. The multiverse theory has created significant dissension in the scientific community about the viability of the inflationary model.

Paul Steinhardt, one of the original architects of the inflationary model, introduced the first example of eternal inflation in 1983. [100] He showed that inflation could proceed forever by producing bubbles of non-inflating space filled with hot matter and radiation, surrounded by empty space that continues to inflate. The bubbles could not grow fast enough to keep up with the inflation. Later that same year, Alexander Vilenkin showed that eternal inflation is generic. [101]

Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume. It has been shown that any inflationary theory with an unbounded potential is eternal. There are well-known theorems that this steady state cannot continue forever into the past. Inflationary spacetime, which is similar to de Sitter space, is incomplete without a contracting region. However, unlike de Sitter space, fluctuations in a contracting inflationary space collapse to form a gravitational singularity, a point where densities become infinite. Therefore, it is necessary to have a theory for the Universe's initial conditions.

In eternal inflation, regions with inflation have an exponentially growing volume, while regions that are not inflating don't. This suggests that the volume of the inflating part of the Universe in the global picture is always unimaginably larger than the part that has stopped inflating, even though inflation eventually ends as seen by any single pre-inflationary observer. Scientists disagree about how to assign a probability distribution to this hypothetical anthropic landscape. If the probability of different regions is counted by volume, one should expect that inflation will never end, or, applying boundary conditions that a local observer exists to observe it, that inflation will end as late as possible.

Some physicists believe this paradox can be resolved by weighting observers by their pre-inflationary volume. Others believe that there is no resolution to the paradox and that the multiverse is a critical flaw in the inflationary paradigm. Paul Steinhardt, who first introduced the eternal inflationary model, [100] later became one of its most vocal critics for this reason. [102] [103] [104]

Initial conditions

Some physicists have tried to avoid the initial conditions problem by proposing models for an eternally inflating universe with no origin. [105] [106] [107] These models propose that while the Universe, on the largest scales, expands exponentially, it was, is, and always will be spatially infinite and has existed, and will exist, forever.

Other proposals attempt to describe the ex nihilo creation of the Universe based on quantum cosmology and the following inflation. Vilenkin put forth one such scenario. [101] Hartle and Hawking offered the no-boundary proposal for the initial creation of the Universe in which inflation comes about naturally. [108] [109] [110]

Guth described the inflationary universe as the "ultimate free lunch": [111] [112] new universes, similar to our own, are continually produced in a vast inflating background. Gravitational interactions, in this case, circumvent (but do not violate) the first law of thermodynamics (energy conservation) and the second law of thermodynamics (entropy and the arrow of time problem). However, while there is consensus that this solves the initial conditions problem, some have disputed this, as it is much more likely that the Universe came about by a quantum fluctuation. Don Page was an outspoken critic of inflation because of this anomaly. [113] He stressed that the thermodynamic arrow of time necessitates low entropy initial conditions, which would be highly unlikely. According to this argument, rather than solving the problem, the inflation theory aggravates it: the reheating at the end of the inflation era increases entropy, making it necessary for the initial state of the Universe to be even more orderly than in other Big Bang theories with no inflation phase.

Hawking and Page later found ambiguous results when they attempted to compute the probability of inflation in the Hartle-Hawking initial state. [114] Other authors have argued that, since inflation is eternal, the probability doesn't matter as long as it is not precisely zero: once it starts, inflation perpetuates itself and quickly dominates the Universe. [5] [115] : 223–225 However, Albrecht and Lorenzo Sorbo argued that the probability of an inflationary cosmos, consistent with today's observations, emerging by a random fluctuation from some pre-existent state is much higher than that of a non-inflationary cosmos. This is because the "seed" amount of non-gravitational energy required for the inflationary cosmos is so much less than that for a non-inflationary alternative, which outweighs any entropic considerations. [116]

Another problem that has occasionally been mentioned is the trans-Planckian problem or trans-Planckian effects. [117] Since the energy scale of inflation and the Planck scale are relatively close, some of the quantum fluctuations that have made up the structure in our universe were smaller than the Planck length before inflation. Therefore, there ought to be corrections from Planck-scale physics, in particular the unknown quantum theory of gravity. Some disagreement remains about the magnitude of this effect: about whether it is just on the threshold of detectability or completely undetectable. [118]

Hybrid inflation

Another kind of inflation, called hybrid inflation, is an extension of new inflation. It introduces additional scalar fields, so that while one of the scalar fields is responsible for normal slow roll inflation, another triggers the end of inflation: when inflation has continued for sufficiently long, it becomes favorable for the second field to decay into a much lower energy state. [119]

In hybrid inflation, one scalar field is responsible for most of the energy density (thus determining the rate of expansion), while another is responsible for the slow roll (thus determining the period of inflation and its termination). Thus fluctuations in the former inflaton would not affect inflation termination, while fluctuations in the latter would not affect the rate of expansion. Therefore, hybrid inflation is not eternal. [120] [121] When the second (slow-rolling) inflaton reaches the bottom of its potential, it changes the location of the minimum of the first inflaton's potential, which leads to a fast roll of the inflaton down its potential, leading to termination of inflation.

Relation to dark energy

Dark energy is broadly similar to inflation and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, about $10^{-12}$ GeV, roughly 27 orders of magnitude less than the scale of inflation.
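As a rough order-of-magnitude check (assuming, purely for illustration, an inflation energy scale near the GUT scale of about $10^{15}$ GeV, a value not stated in the text above):

$$\frac{E_{\text{inflation}}}{E_{\text{dark energy}}} \sim \frac{10^{15}\ \text{GeV}}{10^{-12}\ \text{GeV}} = 10^{27},$$

i.e. about 27 orders of magnitude, in line with the figure quoted above.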

Inflation and string cosmology

The discovery of flux compactifications opened the way for reconciling inflation and string theory. [122] Brane inflation suggests that inflation arises from the motion of D-branes [123] in the compactified geometry, usually towards a stack of anti-D-branes. This theory, governed by the Dirac-Born-Infeld action, is different from ordinary inflation. The dynamics are not completely understood. It appears that special conditions are necessary since inflation occurs in tunneling between two vacua in the string landscape. The process of tunneling between two vacua is a form of old inflation, but new inflation must then occur by some other mechanism.

Inflation and loop quantum gravity

When investigating the effects the theory of loop quantum gravity would have on cosmology, a loop quantum cosmology model has evolved that provides a possible mechanism for cosmological inflation. Loop quantum gravity assumes a quantized spacetime. If the energy density is larger than can be held by the quantized spacetime, it is thought to bounce back. [124]

Other models have been advanced that are claimed to explain some or all of the observations addressed by inflation.

Big bounce

The big bounce hypothesis attempts to replace the cosmic singularity with a cosmic contraction and bounce, thereby explaining the initial conditions that led to the big bang. [125] The flatness and horizon problems are naturally solved in the Einstein-Cartan-Sciama-Kibble theory of gravity, without needing an exotic form of matter or free parameters. [126] [127] This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the Big Bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.

Ekpyrotic and cyclic models

The ekpyrotic and cyclic models are also considered adjuncts to inflation. These models solve the horizon problem through an expanding epoch well before the Big Bang, and then generate the required spectrum of primordial density perturbations during a contracting phase leading to a Big Crunch. The Universe passes through the Big Crunch and emerges in a hot Big Bang phase. In this sense they are reminiscent of Richard Chace Tolman's oscillatory universe; in Tolman's model, however, the total age of the Universe is necessarily finite, while in these models this is not necessarily so. Whether the correct spectrum of density fluctuations can be produced, and whether the Universe can successfully navigate the Big Bang/Big Crunch transition, remains a topic of controversy and current research. Ekpyrotic models avoid the magnetic monopole problem as long as the temperature at the Big Crunch/Big Bang transition remains below the Grand Unified Scale, as this is the temperature required to produce magnetic monopoles in the first place. As things stand, there is no evidence of any 'slowing down' of the expansion, but this is not surprising as each cycle is expected to last on the order of a trillion years.

String gas cosmology

String theory requires that, in addition to the three observable spatial dimensions, additional dimensions exist that are curled up or compactified (see also Kaluza–Klein theory). Extra dimensions appear as a frequent component of supergravity models and other approaches to quantum gravity. This raised the contingent question of why four space-time dimensions became large and the rest became unobservably small. An attempt to address this question, called string gas cosmology, was proposed by Robert Brandenberger and Cumrun Vafa. [128] This model focuses on the dynamics of the early universe considered as a hot gas of strings. Brandenberger and Vafa show that a dimension of spacetime can only expand if the strings that wind around it can efficiently annihilate each other. Each string is a one-dimensional object, and the largest number of dimensions in which two strings will generically intersect (and, presumably, annihilate) is three. Therefore, the most likely number of non-compact (large) spatial dimensions is three. Current work on this model centers on whether it can succeed in stabilizing the size of the compactified dimensions and produce the correct spectrum of primordial density perturbations. [129] The original model did not "solve the entropy and flatness problems of standard cosmology", [130] although Brandenberger and coauthors later argued that these problems can be eliminated by implementing string gas cosmology in the context of a bouncing-universe scenario. [131] [132]

Varying c

Cosmological models employing a variable speed of light (VSL) have been proposed to resolve the horizon problem and provide an alternative to cosmic inflation. In the VSL models, the fundamental constant c, denoting the speed of light in vacuum, is greater in the early universe than its present value, effectively increasing the particle horizon at the time of decoupling sufficiently to account for the observed isotropy of the CMB.

Since its introduction by Alan Guth in 1980, the inflationary paradigm has become widely accepted. Nevertheless, many physicists, mathematicians, and philosophers of science have voiced criticisms, claiming untestable predictions and a lack of serious empirical support. [5] In 1999, John Earman and Jesús Mosterín published a thorough critical review of inflationary cosmology, concluding, "we do not think that there are, as yet, good grounds for admitting any of the models of inflation into the standard core of cosmology." [6]

In order to work, and as pointed out by Roger Penrose from 1986 on, inflation requires extremely specific initial conditions of its own, so that the problem (or pseudo-problem) of initial conditions is not solved: "There is something fundamentally misconceived about trying to explain the uniformity of the early universe as resulting from a thermalization process. […] For, if the thermalization is actually doing anything […] then it represents a definite increasing of the entropy. Thus, the universe would have been even more special before the thermalization than after." [133] The problem of specific or "fine-tuned" initial conditions would not have been solved; it would have gotten worse. At a conference in 2015, Penrose said that "inflation isn't falsifiable, it's falsified. […] BICEP did a wonderful service by bringing all the Inflation-ists out of their shell, and giving them a black eye." [7]

A recurrent criticism of inflation is that the invoked inflaton field does not correspond to any known physical field, and that its potential energy curve seems to be an ad hoc contrivance to accommodate almost any data obtainable. Paul Steinhardt, one of the founding fathers of inflationary cosmology, has recently become one of its sharpest critics. He calls 'bad inflation' a period of accelerated expansion whose outcome conflicts with observations, and 'good inflation' one compatible with them: "Not only is bad inflation more likely than good inflation, but no inflation is more likely than either […] Roger Penrose considered all the possible configurations of the inflaton and gravitational fields. Some of these configurations lead to inflation […] Other configurations lead to a uniform, flat universe directly – without inflation. Obtaining a flat universe is unlikely overall. Penrose's shocking conclusion, though, was that obtaining a flat universe without inflation is much more likely than with inflation – by a factor of 10 to the googol (10 to the 100) power!" [5] [115] Together with Anna Ijjas and Abraham Loeb, he wrote articles claiming that the inflationary paradigm is in trouble in view of the data from the Planck satellite. [134] [135] Counter-arguments were presented by Alan Guth, David Kaiser, and Yasunori Nomura [136] and by Andrei Linde, [137] saying that "cosmic inflation is on a stronger footing than ever before". [136]


How can an expanding Universe look homogeneous?

I think the evolution of the universe is taken into account when reaching that conclusion.

The universe looks the same no matter which direction we look in. Looking further out, the universe looks different - but it looks different in the same way no matter which direction we look in.

I may be missing your point, though.

However, that was not your original question. Your original question was whether it was inconsistent to have inhomogeneity on the past lightcone and homogeneity on spatial slices (it isn't).

Both statements are correct, although "inhomogeneity" is a somewhat misleading way of describing observation B). What we observe as we go further and further along our past light cone is better described as "times further in the past" rather than "increasing distances", and the fact that the observed density increases as we go further along our past light cone just means the universe was denser in the past.

With that caveat, observation B) is obvious (you stated it yourself in the OP). A is not quite so obvious, but it follows from the observed isotropy, which is part of observation B) (we see the same behavior of density in all directions). If the universe were not homogeneous at a given "instant" of cosmological time (more technically, on a given spacelike slice of constant cosmological time), we would not expect to see isotropic behavior of the density when we look back along our past light cone.

Please provide a reference.
Homogeneity is an assumption, albeit a reasonable one. You cannot show that the Universe is homogeneous by direct observation. (You could infer that what you see is going to evolve essentially to what you see locally at the present by observing your past lightcone and applying your knowledge of GR and how matter behaves).

Yes, I agree it's not a certain conclusion. But I think it is a plausible inference.

Quoting one particle physicist,

"Based on observations and the second law of thermodynamics, we know that at the beginning of the universe, a quantity called entropy was extremely low, and has been increasing ever since. Entropy can be thought of as a measure of disorder or randomness. A low-entropy beginning to our universe means that the big bang origin was not a chaotic event but was highly ordered"

If the entire universe began from a singular, highly ordered, ultra-low-entropy state, then everything everywhere began in a very similar initial condition. And if everything everywhere started in extremely similar states, wouldn't we expect everything to evolve towards similar states?

Is it possible that the uniformity, homogeneity, and isotropy of the cosmic microwave background radiation merely reflects that initial low-entropy state?

I think this was answered here -

So I understand that going forward in time towards a hypothetical big crunch is not analogous to going backwards in time towards The Big Bang.


Going forward in time towards a big crunch, densities increase, energy densities increase, temperatures increase and entropy also increases. Like the compression stroke of a car engine.

Whereas going backwards in time towards the Big Bang, although matter and energy densities and temperatures increase, entropy decreases.

What does that even mean? How can you have high temperature low entropy?

The only thing I can think of is some kind of Fermi fluid at zero kelvin, which, even though it has no temperature, still, by the exclusion principle, has a Fermi "surface" of the Fermi "sea" whose occupied states have very high energy??

What exactly is ultra high temperature yet ultra low entropy??

Because the universe gets more and more uniform--less and less clumped--as you go back towards the Big Bang. The decrease in entropy due to that extreme uniformity more than compensates for the increase in entropy due to the higher density and temperature.

If the universe were to collapse into a Big Crunch, it would not be the same; the universe would still be getting more clumped gravitationally as it collapsed.

@phinds @PeterDonis Thanks, both. My intuitions for this are clearly way out of whack.

I do recall from a different thread Peter explaining to me that in the presence of gravity clumping is a high entropy state because there are many more likely clumped states than un-clumped states, which made perfect sense to me at the time (and still does).

so, you're discussing the collapses of stars??

as stars gravitationally collapse, their interiors heat up to millions & billions of degrees (T), even as they lose heat?

whereas the surrounding environment is much cooler (t), so when it absorbs all that heat, its entropy increase is greater?

dS_net = dS + dS* = dQ(1/t -1/T) > 0
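Read as ordinary Clausius entropy bookkeeping (with gravity ignored, which is exactly what the follow-up posts question), the expression above just says that heat flowing from a hot source to a cooler environment raises the total entropy. A minimal numeric sketch, with purely illustrative numbers:

```python
# Minimal sketch of dS_net = dQ*(1/t - 1/T) > 0 for heat dQ leaving a hot
# source at temperature T and being absorbed by cooler surroundings at t.
# All numbers are illustrative, not astrophysical data.

dQ = 1.0e30   # joules of heat transferred (illustrative)
T  = 1.0e7    # kelvin, hot source (illustrative)
t  = 1.0e4    # kelvin, cooler surroundings (illustrative)

dS_source       = -dQ / T   # entropy lost by the hot source
dS_surroundings = +dQ / t   # entropy gained by the cooler surroundings
dS_net = dS_source + dS_surroundings

print(f"dS_net = {dS_net:.3e} J/K  (positive whenever t < T)")
```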

I am not seeing how dS = dQ/T can describe a system where entropy is strongly influenced by gravity.

If I have my B level concepts straight, considering a system containing a neutron star and nothing else, with a relatively large boundary (say one light year) that system has a very high entropy because it is in a very probable state given that gravity is clumping all the particles together as one would expect. If there were no gravity in this one-light year diameter system, then the entropy would be, I think, very low, because only considering thermal interactions and no gravity it is very unlikely (probably no mechanism to even have such a configuration) that all the particles would end up grouped together as tightly as the particles of a neutron star.

So trying to make sense of dS = dQ/T in a system where entropy is very influenced by gravity may not work - it seems like some needed modelling is missing. I tried a couple Google searches but can't find anything that looks like what I am picturing.


Did cosmic inflation happen everywhere in the Universe?

The equations of General Relativity tell us that at the earliest time our physics can describe, the universe had infinite space and infinite density (i.e. matter).

Then space started expanding, thus increasing the distance between any two points of that infinitely dense matter, thus making it less dense and eventually creating galaxies and stars.

So, the big bang which is really a big expansion happened everywhere in the universe.

So, then, cosmic inflation suggested by Alan Guth tries to give an explanation as to what caused the big bang (expansion).

So, my question is, did inflation happen everywhere in the universe, or did it happen only in the region (infinitely small region) which led to our observable universe?

So, my understanding is that at the earliest time our equations tell us there was infinite space and infinite matter density. Then in some places in that infinite space inflation happened, creating bubble universes like ours, but in other areas the universe just expanded. So, somewhere in the infinite universe even now inflation can happen, creating more bubble universes. This is called eternal inflation.

Is this the current model of the big bang with inflation added?

Then what is there outside the bubble universes? More galaxies, or more of that infinitely dense matter which did not go through inflation to create a bubble universe?



In 1912, Vesto Slipher discovered that light from remote galaxies was redshifted, [3] [4] which was later interpreted as galaxies receding from the Earth. In 1922, Alexander Friedmann used Einstein field equations to provide theoretical evidence that the universe is expanding. [5] In 1927, Georges Lemaître independently reached a similar conclusion to Friedmann on a theoretical basis, and also presented the first observational evidence for a linear relationship between distance to galaxies and their recessional velocity. [6] Edwin Hubble observationally confirmed Lemaître's findings two years later. [7] Assuming the cosmological principle, these findings would imply that all galaxies are moving away from each other.

Based on large quantities of experimental observation and theoretical work, the scientific consensus is that space itself is expanding, and that it expanded very rapidly within the first fraction of a second after the Big Bang. This kind of expansion is known as "metric expansion". In mathematics and physics, a "metric" means a measure of distance, and the term implies that the sense of distance within the universe is itself changing.

The modern explanation for the metric expansion of space was proposed by physicist Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today. Guth found in his investigation that if the universe contained a field that has a positive-energy false vacuum state, then according to general relativity it would generate an exponential expansion of space. It was very quickly realized that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look as it does today, the universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation theory largely resolves these problems as well, thus making a universe like ours much more likely in the context of Big Bang theory. According to Roger Penrose, inflation does not solve the main problem it was supposed to solve, namely the incredibly low entropy (with the unlikeliness of the state on the order of $1/10^{10^{128}}$) of the early Universe contained in the gravitational conformal degrees of freedom (in contrast to field degrees of freedom, such as the cosmic microwave background, whose smoothness can be explained by inflation). Thus, he puts forward his scenario of the evolution of the Universe: conformal cyclic cosmology. [8]

No field responsible for cosmic inflation has been discovered. However, such a field, if found in the future, would be scalar. The first similar scalar field proven to exist was only discovered in 2012–2013 and is still being researched. So it is not seen as problematic that a field responsible for cosmic inflation and the metric expansion of space has not yet been discovered.

The proposed field and its quanta (the subatomic particles related to it) have been named inflaton. If this field did not exist, scientists would have to propose a different explanation for all the observations that strongly suggest a metric expansion of space has occurred, and is still occurring much more slowly today.

To understand the metric expansion of the universe, it is helpful to discuss briefly what a metric is, and how metric expansion works.

A metric defines the concept of distance, by stating in mathematical terms how distances between two nearby points in space are measured, in terms of the coordinate system. Coordinate systems locate points in a space (of whatever number of dimensions) by assigning unique positions on a grid, known as coordinates, to each point. Latitude and longitude, and x-y graphs are common examples of coordinates. A metric is a formula which describes how a number known as "distance" is to be measured between two points.

It may seem obvious that distance is measured by a straight line, but in many cases it is not. For example, long haul aircraft travel along a curve known as a "great circle" and not a straight line, because that is a better metric for air travel. (A straight line would go through the earth). Another example is planning a car journey, where one might want the shortest journey in terms of travel time - in that case a straight line is a poor choice of metric because the shortest distance by road is not normally a straight line, and even the path nearest to a straight line will not necessarily be the quickest. A final example is the internet, where even for nearby towns, the quickest route for data can be via major connections that go across the country and back again. In this case the metric used will be the shortest time that data takes to travel between two points on the network.

In cosmology, we cannot use a ruler to measure metric expansion, because our ruler's internal forces easily overcome the extremely slow expansion of space, leaving the ruler intact. Also, any objects on or near earth that we might measure are being held together or pushed apart by several forces which are far larger in their effects. So even if we could measure the tiny expansion that is still happening, we would not notice the change on a small scale or in everyday life. On a large intergalactic scale, we can use other tests of distance and these do show that space is expanding, even if a ruler on earth could not measure it.

The metric expansion of space is described using the mathematics of metric tensors. The coordinate system we use is called "comoving coordinates", a type of coordinate system which takes account of time as well as space and the speed of light, and allows us to incorporate the effects of both general and special relativity.

Example: "Great Circle" metric for Earth's surface Edit

For example, consider the measurement of distance between two places on the surface of the Earth. This is a simple, familiar example of spherical geometry. Because the surface of the Earth is two-dimensional, points on the surface of the Earth can be specified by two coordinates – for example, the latitude and longitude. Specification of a metric requires that one first specify the coordinates used. In our simple example of the surface of the Earth, we could choose any kind of coordinate system we wish, for example latitude and longitude, or X-Y-Z Cartesian coordinates. Once we have chosen a specific coordinate system, the numerical values of the coordinates of any two points are uniquely determined, and based upon the properties of the space being discussed, the appropriate metric is mathematically established too. On the curved surface of the Earth, we can see this effect in long-haul airline flights where the distance between two points is measured based upon a great circle, rather than the straight line one might plot on a two-dimensional map of the Earth's surface. In general, such shortest-distance paths are called "geodesics". In Euclidean geometry, the geodesic is a straight line, while in non-Euclidean geometry such as on the Earth's surface, this is not the case. Indeed, even the shortest-distance great circle path is always longer than the Euclidean straight line path which passes through the interior of the Earth. The difference between the straight line path and the shortest-distance great circle path is due to the curvature of the Earth's surface. While there is always an effect due to this curvature, at short distances the effect is small enough to be unnoticeable.
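As a small illustration of how the choice of metric changes the number called "distance", the sketch below compares the great-circle distance between two cities with the straight chord through the Earth's interior. The coordinates and Earth radius are approximate round numbers assumed only for illustration.

```python
from math import radians, sin, cos, asin, sqrt

R = 6371.0  # mean Earth radius in km (approximate)

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine formula: distance along the surface geodesic (the great circle)."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

def chord_km(lat1, lon1, lat2, lon2):
    """Straight Euclidean line through the Earth's interior between the same points."""
    theta = great_circle_km(lat1, lon1, lat2, lon2) / R  # central angle in radians
    return 2 * R * sin(theta / 2)

# London -> New York (approximate coordinates)
print(great_circle_km(51.51, -0.13, 40.71, -74.01))  # ~5570 km along the surface
print(chord_km(51.51, -0.13, 40.71, -74.01))         # a somewhat shorter straight chord
```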

On plane maps, great circles of the Earth are mostly not shown as straight lines. Indeed, there is a seldom-used map projection, namely the gnomonic projection, where all great circles are shown as straight lines, but in this projection, the distance scale varies very much in different areas. There is no map projection in which the distance between any two points on Earth, measured along the great circle geodesics, is directly proportional to their distance on the map; such accuracy is possible only with a globe.

Metric tensors

In differential geometry, the backbone mathematics for general relativity, a metric tensor can be defined which precisely characterizes the space being described by explaining the way distances should be measured in every possible direction. General relativity necessarily invokes a metric in four dimensions (one of time, three of space) because, in general, different reference frames will experience different intervals of time and space depending on the inertial frame. This means that the metric tensor in general relativity relates precisely how two events in spacetime are separated. A metric expansion occurs when the metric tensor changes with time (and, specifically, whenever the spatial part of the metric gets larger as time goes forward). This kind of expansion is different from all kinds of expansions and explosions commonly seen in nature in no small part because times and distances are not the same in all reference frames, but are instead subject to change. A useful visualization is to approach the subject not as objects in a fixed "space" moving apart into "emptiness", but as space itself growing between objects without any acceleration of the objects themselves. The space between objects shrinks or grows as the various geodesics converge or diverge.
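A concrete example of a metric whose spatial part grows with time is the spatially flat FLRW line element of standard cosmology (quoted here for illustration):

$$ds^2 = -c^2\,dt^2 + a(t)^2\left(dx^2 + dy^2 + dz^2\right),$$

where $a(t)$ is the scale factor discussed later in this article. Metric expansion corresponds to $a(t)$ increasing with cosmic time, so that a fixed coordinate separation corresponds to a growing proper distance.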

Because this expansion is caused by relative changes in the distance-defining metric, this expansion (and the resultant movement apart of objects) is not restricted by the speed of light upper bound of special relativity. Two reference frames that are globally separated can be moving apart faster than light without violating special relativity, although whenever two reference frames diverge from each other faster than the speed of light, there will be observable effects associated with such situations including the existence of various cosmological horizons.

Theory and observations suggest that very early in the history of the universe, there was an inflationary phase where the metric changed very rapidly, and that the remaining time-dependence of this metric is what we observe as the so-called Hubble expansion, the moving apart of all gravitationally unbound objects in the universe. The expanding universe is therefore a fundamental feature of the universe we inhabit – a universe fundamentally different from the static universe Albert Einstein first considered when he developed his gravitational theory.

Comoving coordinates

In expanding space, proper distances are dynamical quantities which change with time. An easy way to correct for this is to use comoving coordinates which remove this feature and allow for a characterization of different locations in the universe without having to characterize the physics associated with metric expansion. In comoving coordinates, the distances between all objects are fixed and the instantaneous dynamics of matter and light are determined by the normal physics of gravity and electromagnetic radiation. Any time-evolution however must be accounted for by taking into account the Hubble law expansion in the appropriate equations in addition to any other effects that may be operating (gravity, dark energy, or curvature, for example). Cosmological simulations that run through significant fractions of the universe's history therefore must include such effects in order to make applicable predictions for observational cosmology.

Measurement of expansion and change of rate of expansion

In principle, the expansion of the universe could be measured by taking a standard ruler and measuring the distance between two cosmologically distant points, waiting a certain time, and then measuring the distance again, but in practice, standard rulers are not easy to find on cosmological scales and the timescales over which a measurable expansion would be visible are too great to be observable even by multiple generations of humans. The expansion of space is measured indirectly. The theory of relativity predicts phenomena associated with the expansion, notably the redshift-versus-distance relationship known as Hubble's Law; functional forms for cosmological distance measurements that differ from what would be expected if space were not expanding; and an observable change in the matter and energy density of the universe seen at different lookback times.

The first measurement of the expansion of space came with Hubble's realization of the velocity vs. redshift relation. Most recently, by comparing the apparent brightness of distant standard candles to the redshift of their host galaxies, the expansion rate of the universe has been measured to be H0 = 73.24 ± 1.74 (km/s)/Mpc . [9] This means that for every million parsecs of distance from the observer, the light received from that distance is cosmologically redshifted by about 73 kilometres per second (160,000 mph). On the other hand, by assuming a cosmological model, e.g. Lambda-CDM model, one can infer the Hubble constant from the size of the largest fluctuations seen in the Cosmic Microwave Background. A higher Hubble constant would imply a smaller characteristic size of CMB fluctuations, and vice versa. The Planck collaboration measure the expansion rate this way and determine H0 = 67.4 ± 0.5 (km/s)/Mpc . [10] There is a disagreement between the two measurements, the distance ladder being model-independent and the CMB measurement depending on the fitted model, which hints at new physics beyond our standard cosmological models.
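A quick numerical illustration of Hubble's law, $v = H_0 d$, using the two values of $H_0$ quoted above and an arbitrary illustrative distance of 100 Mpc:

```python
# Hubble's law v = H0 * d, evaluated for the two quoted H0 values.
H0_ladder = 73.24   # (km/s)/Mpc, distance-ladder value quoted above
H0_planck = 67.4    # (km/s)/Mpc, Planck CMB value quoted above
d = 100.0           # Mpc, an arbitrary illustrative distance

for name, H0 in [("distance ladder", H0_ladder), ("Planck CMB", H0_planck)]:
    v = H0 * d      # predicted recession velocity in km/s
    print(f"{name}: v = {v:.0f} km/s at {d:.0f} Mpc")
# The ~9% spread between the two predictions is the "Hubble tension" discussed later in this article.
```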

The Hubble parameter is not thought to be constant through time. There are dynamical forces acting on the particles in the universe which affect the expansion rate. It was earlier expected that the Hubble parameter would be decreasing as time went on due to the influence of gravitational interactions in the universe, and thus there is an additional observable quantity in the universe called the deceleration parameter which cosmologists expected to be directly related to the matter density of the universe. Surprisingly, the deceleration parameter was measured by two different groups to be less than zero (actually, consistent with −1) which implied that today the Hubble parameter is converging to a constant value as time goes on. Some cosmologists have whimsically called the effect associated with the "accelerating universe" the "cosmic jerk". [11] The 2011 Nobel Prize in Physics was given for the discovery of this phenomenon. [12]

In October 2018, scientists presented a new third way (two earlier methods, one based on redshifts and another on the cosmic distance ladder, gave results that do not agree), using information from gravitational wave events (especially those involving the merger of neutron stars, like GW170817), of determining the Hubble Constant, essential in establishing the rate of expansion of the universe. [13] [14]

Measuring distances in expanding space

At cosmological scales, the present universe is geometrically flat to within experimental error, [15] and consequently the rules of Euclidean geometry associated with Euclid's fifth postulate hold, though in the past spacetime could have been highly curved. In part to accommodate such different geometries, the expansion of the universe is inherently general relativistic. It cannot be modeled with special relativity alone: though such models exist, they are at fundamental odds with the observed interaction between matter and spacetime seen in our universe.

The images to the right show two views of spacetime diagrams that show the large-scale geometry of the universe according to the ΛCDM cosmological model. Two of the dimensions of space are omitted, leaving one dimension of space (the dimension that grows as the cone gets larger) and one of time (the dimension that proceeds "up" the cone's surface). The narrow circular end of the diagram corresponds to a cosmological time of 700 million years after the Big Bang, while the wide end is a cosmological time of 18 billion years, where one can see the beginning of the accelerating expansion as a splaying outward of the spacetime, a feature which eventually dominates in this model. The purple grid lines mark off cosmological time at intervals of one billion years from the Big Bang. The cyan grid lines mark off comoving distance at intervals of one billion light years in the present era (less in the past and more in the future). Note that the circular curling of the surface is an artifact of the embedding with no physical significance and is done purely for illustrative purposes; a flat universe does not curl back onto itself. (A similar effect can be seen in the tubular shape of the pseudosphere.)

The brown line on the diagram is the worldline of Earth (or more precisely its location in space, even before it was formed). The yellow line is the worldline of the most distant known quasar. The red line is the path of a light beam emitted by the quasar about 13 billion years ago and reaching Earth at the present day. The orange line shows the present-day distance between the quasar and Earth, about 28 billion light years, which is a larger distance than the age of the universe multiplied by the speed of light, ct.

According to the equivalence principle of general relativity, the rules of special relativity are locally valid in small regions of spacetime that are approximately flat. In particular, light always travels locally at the speed c; in the diagram, this means, according to the convention of constructing spacetime diagrams, that light beams always make an angle of 45° with the local grid lines. It does not follow, however, that light travels a distance ct in a time t, as the red worldline illustrates. While it always moves locally at c, its time in transit (about 13 billion years) is not related to the distance traveled in any simple way, since the universe expands as the light beam traverses space and time. The distance traveled is thus inherently ambiguous because of the changing scale of the universe. Nevertheless, there are two distances which appear to be physically meaningful: the distance between Earth and the quasar when the light was emitted, and the distance between them in the present era (taking a slice of the cone along the dimension defined as the spatial dimension). The former distance is about 4 billion light years, much smaller than ct, whereas the latter distance (shown by the orange line) is about 28 billion light years, much larger than ct. In other words, if space were not expanding today, it would take 28 billion years for light to travel between Earth and the quasar, while if the expansion had stopped at the earlier time, it would have taken only 4 billion years.

The light took much longer than 4 billion years to reach us though it was emitted from only 4 billion light years away. In fact, the light emitted towards Earth was actually moving away from Earth when it was first emitted; the metric distance to Earth increased with cosmological time for the first few billion years of its travel time, also indicating that the expansion of space between Earth and the quasar at the early time was faster than the speed of light. None of this behavior originates from a special property of metric expansion, but rather from local principles of special relativity integrated over a curved surface.

Topology of expanding space

Over time, the space that makes up the universe is expanding. The words 'space' and 'universe', sometimes used interchangeably, have distinct meanings in this context. Here 'space' is a mathematical concept that stands for the three-dimensional manifold into which our respective positions are embedded, while 'universe' refers to everything that exists, including the matter and energy in space, the extra dimensions that may be wrapped up in various strings, and the time through which various events take place. The expansion of space is in reference to this 3-D manifold only; that is, the description involves no structures such as extra dimensions or an exterior universe. [16]

The ultimate topology of space is a posteriori – something which in principle must be observed – as there are no constraints that can simply be reasoned out (in other words there can not be any a priori constraints) on how the space in which we live is connected or whether it wraps around on itself as a compact space. Though certain cosmological models such as Gödel's universe even permit bizarre worldlines which intersect with themselves, ultimately the question as to whether we are in something like a "Pac-Man universe" where if traveling far enough in one direction would allow one to simply end up back in the same place like going all the way around the surface of a balloon (or a planet like the Earth) is an observational question which is constrained as measurable or non-measurable by the universe's global geometry. At present, observations are consistent with the universe being infinite in extent and simply connected, though we are limited in distinguishing between simple and more complicated proposals by cosmological horizons. The universe could be infinite in extent or it could be finite but the evidence that leads to the inflationary model of the early universe also implies that the "total universe" is much larger than the observable universe, and so any edges or exotic geometries or topologies would not be directly observable as light has not reached scales on which such aspects of the universe, if they exist, are still allowed. For all intents and purposes, it is safe to assume that the universe is infinite in spatial extent, without edge or strange connectedness. [17]

Regardless of the overall shape of the universe, the question of what the universe is expanding into is one which does not require an answer according to the theories which describe the expansion; the way we define space in our universe in no way requires additional exterior space into which it can expand, since an expansion of an infinite expanse can happen without changing the infinite extent of the expanse. All that is certain is that the manifold of space in which we live simply has the property that the distances between objects are getting larger as time goes on. This only implies the simple observational consequences associated with the metric expansion explored below. No "outside" or embedding in hyperspace is required for an expansion to occur. The visualizations often seen of the universe growing as a bubble into nothingness are misleading in that respect. There is no reason to believe there is anything "outside" of the expanding universe into which the universe expands.

Even if the overall spatial extent is infinite and thus the universe cannot get any "larger", we still say that space is expanding because, locally, the characteristic distance between objects is increasing. As an infinite space grows, it remains infinite.

Density of universe during expansion

Despite being extremely dense when very young and during part of its early expansion - far denser than is usually required to form a black hole - the universe did not re-collapse into a black hole. This is because commonly-used calculations for gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang.

Effects of expansion on small scales

The expansion of space is sometimes described as a force which acts to push objects apart. Though this is an accurate description of the effect of the cosmological constant, it is not an accurate picture of the phenomenon of expansion in general. [18]

In addition to slowing the overall expansion, gravity causes local clumping of matter into stars and galaxies. Once objects are formed and bound by gravity, they "drop out" of the expansion and do not subsequently expand under the influence of the cosmological metric, there being no force compelling them to do so.

There is no difference between the inertial expansion of the universe and the inertial separation of nearby objects in a vacuum the former is simply a large-scale extrapolation of the latter.

Once objects are bound by gravity, they no longer recede from each other. Thus, the Andromeda galaxy, which is bound to the Milky Way galaxy, is actually falling towards us and is not expanding away. Within the Local Group, the gravitational interactions have changed the inertial patterns of objects such that there is no cosmological expansion taking place. Once one goes beyond the Local Group, the inertial expansion is measurable, though systematic gravitational effects imply that larger and larger parts of space will eventually fall out of the "Hubble Flow" and end up as bound, non-expanding objects up to the scales of superclusters of galaxies. We can predict such future events by knowing the precise way the Hubble Flow is changing as well as the masses of the objects to which we are being gravitationally pulled. Currently, the Local Group is being gravitationally pulled towards either the Shapley Supercluster or the "Great Attractor", with which, if dark energy were not acting, we would eventually merge and which we would no longer see expanding away from us.

A consequence of metric expansion being due to inertial motion is that a uniform local "explosion" of matter into a vacuum can be locally described by the FLRW geometry, the same geometry which describes the expansion of the universe as a whole and was also the basis for the simpler Milne universe which ignores the effects of gravity. In particular, general relativity predicts that light will move at the speed c with respect to the local motion of the exploding matter, a phenomenon analogous to frame dragging.

The situation changes somewhat with the introduction of dark energy or a cosmological constant. A cosmological constant due to a vacuum energy density has the effect of adding a repulsive force between objects which is proportional (not inversely proportional) to distance. Unlike inertia it actively "pulls" on objects which have clumped together under the influence of gravity, and even on individual atoms. However, this does not cause the objects to grow steadily or to disintegrate; unless they are very weakly bound, they will simply settle into an equilibrium state which is slightly (undetectably) larger than it would otherwise have been. As the universe expands and the matter in it thins, the gravitational attraction decreases (since it is proportional to the density), while the cosmological repulsion increases; thus the ultimate fate of the ΛCDM universe is a near vacuum expanding at an ever-increasing rate under the influence of the cosmological constant. However, the only locally visible effect of the accelerating expansion is the disappearance (by runaway redshift) of distant galaxies; gravitationally bound objects like the Milky Way do not expand, and the Andromeda galaxy is moving fast enough towards us that it will still merge with the Milky Way in 3 billion years time, and it is also likely that the merged supergalaxy that forms will eventually fall in and merge with the nearby Virgo Cluster. However, galaxies lying farther away from this will recede away at ever-increasing speed and be redshifted out of our range of visibility.

Metric expansion and speed of light

At the end of the early universe's inflationary period, all the matter and energy in the universe was set on an inertial trajectory consistent with the equivalence principle and Einstein's general theory of relativity and this is when the precise and regular form of the universe's expansion had its origin (that is, matter in the universe is separating because it was separating in the past due to the inflaton field).

While special relativity prohibits objects from moving faster than light with respect to a local reference frame where spacetime can be treated as flat and unchanging, it does not apply to situations where spacetime curvature or evolution in time become important. These situations are described by general relativity, which allows the separation between two distant objects to increase faster than the speed of light, although the definition of "distance" here is somewhat different from that used in an inertial frame. The definition of distance used here is the summation or integration of local comoving distances, all done at constant local proper time. For example, galaxies that are more than the Hubble radius, approximately 4.5 gigaparsecs or 14.7 billion light-years, away from us have a recession speed that is faster than the speed of light. Visibility of these objects depends on the exact expansion history of the universe. Light that is emitted today from galaxies beyond the more-distant cosmological event horizon, about 5 gigaparsecs or 16 billion light-years, will never reach us, although we can still see the light that these galaxies emitted in the past. Because of the high rate of expansion, it is also possible for a distance between two objects to be greater than the value calculated by multiplying the speed of light by the age of the universe. These details are a frequent source of confusion among amateurs and even professional physicists. [19] Due to the non-intuitive nature of the subject and what has been described by some as "careless" choices of wording, certain descriptions of the metric expansion of space and the misconceptions to which such descriptions can lead are an ongoing subject of discussion within the fields of education and communication of scientific concepts. [20] [21] [22] [23]
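For orientation, the Hubble radius quoted above is just $c/H_0$. A quick sketch using the Planck value of $H_0$ given earlier (the unit conversion factor is an approximation):

```python
# Hubble radius c / H0: the distance beyond which the naive recession speed
# v = H0 * d exceeds the speed of light.
c = 299792.458          # speed of light, km/s
H0 = 67.4               # km/s per Mpc (Planck value quoted earlier)
MLY_PER_MPC = 3.2616    # million light-years per megaparsec (approximate)

hubble_radius_mpc = c / H0                                   # ~4.4e3 Mpc
hubble_radius_gly = hubble_radius_mpc * MLY_PER_MPC / 1000.0 # ~14.5 billion light-years

print(f"Hubble radius ~ {hubble_radius_mpc / 1000:.2f} Gpc "
      f"~ {hubble_radius_gly:.1f} billion light-years")
```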

Scale factor

At a fundamental level, the expansion of the universe is a property of spatial measurement on the largest measurable scales of our universe. The distances between cosmologically relevant points increase as time passes, leading to observable effects outlined below. This feature of the universe can be characterized by a single parameter that is called the scale factor, which is a function of time and a single value for all of space at any instant (if the scale factor were a function of space, this would violate the cosmological principle). By convention, the scale factor is set to be unity at the present time and, because the universe is expanding, is smaller in the past and larger in the future. Extrapolating back in time with certain cosmological models will yield a moment when the scale factor was zero; our current understanding of cosmology sets this time at 13.799 ± 0.021 billion years ago. If the universe continues to expand forever, the scale factor will approach infinity in the future. In principle, there is no reason that the expansion of the universe must be monotonic, and there are models where at some time in the future the scale factor decreases with an attendant contraction of space rather than an expansion.
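With the convention that the scale factor is unity today, the standard textbook relation between the scale factor at emission and the observed redshift is $1 + z = 1/a_{\text{emit}}$ (stated here for illustration; it is not derived in the text above):

```python
# Scale factor at the time light of redshift z was emitted, with a(today) = 1.
for z in [0.5, 2.0, 1100.0]:          # 1100 is roughly the redshift of the CMB
    a_emit = 1.0 / (1.0 + z)
    print(f"z = {z:>6}: scale factor at emission ~ {a_emit:.2e}")
```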

Other conceptual models of expansion

The expansion of space is often illustrated with conceptual models which show only the size of space at a particular time, leaving the dimension of time implicit.

In the "ant on a rubber rope model" one imagines an ant (idealized as pointlike) crawling at a constant speed on a perfectly elastic rope which is constantly stretching. If we stretch the rope in accordance with the ΛCDM scale factor and think of the ant's speed as the speed of light, then this analogy is numerically accurate – the ant's position over time will match the path of the red line on the embedding diagram above.

In the "rubber sheet model" one replaces the rope with a flat two-dimensional rubber sheet which expands uniformly in all directions. The addition of a second spatial dimension raises the possibility of showing local perturbations of the spatial geometry by local curvature in the sheet.

In the "balloon model" the flat sheet is replaced by a spherical balloon which is inflated from an initial size of zero (representing the big bang). A balloon has positive Gaussian curvature while observations suggest that the real universe is spatially flat, but this inconsistency can be eliminated by making the balloon very large so that it is locally flat to within the limits of observation. This analogy is potentially confusing since it wrongly suggests that the big bang took place at the center of the balloon. In fact points off the surface of the balloon have no meaning, even if they were occupied by the balloon at an earlier time.

In the "raisin bread model" one imagines a loaf of raisin bread expanding in the oven. The loaf (space) expands as a whole, but the raisins (gravitationally bound objects) do not expand they merely grow farther away from each other.

Hubble's law

Technically, the metric expansion of space is a feature of many solutions to the Einstein field equations of general relativity, and distance is measured using the Lorentz interval. This explains observations which indicate that galaxies that are more distant from us are receding faster than galaxies that are closer to us (see Hubble's law).

Cosmological constant and the Friedmann equations

The first general relativistic models predicted that a universe which was dynamical and contained ordinary gravitational matter would contract rather than expand. Einstein's first proposal for a solution to this problem involved adding a cosmological constant into his theories to balance out the contraction, in order to obtain a static universe solution. But in 1922 Alexander Friedmann derived a set of equations known as the Friedmann equations, showing that the universe might expand and presenting the expansion speed in this case. [24] The observations of Edwin Hubble in 1929 suggested that distant galaxies were all apparently moving away from us, so that many scientists came to accept that the universe was expanding.

Hubble's concerns over the rate of expansion

While the metric expansion of space appeared to be implied by Hubble's 1929 observations, Hubble disagreed with the expanding-universe interpretation of the data:

[…] if redshifts are not primarily due to velocity shift […] the velocity-distance relation is linear; the distribution of the nebulae is uniform; there is no evidence of expansion, no trace of curvature, no restriction of the time scale […] and we find ourselves in the presence of one of the principles of nature that is still unknown to us today […] whereas, if redshifts are velocity shifts which measure the rate of expansion, the expanding models are definitely inconsistent with the observations that have been made […] expanding models are a forced interpretation of the observational results.

[If the redshifts are a Doppler shift…] the observations as they stand lead to the anomaly of a closed universe, curiously small and dense, and, it may be added, suspiciously young. On the other hand, if redshifts are not Doppler effects, these anomalies disappear and the region observed appears as a small, homogeneous, but insignificant portion of a universe extended indefinitely both in space and time.

Hubble's skepticism about the universe being too small, dense, and young turned out to be based on an observational error. Later investigations appeared to show that Hubble had confused distant H II regions for Cepheid variables and the Cepheid variables themselves had been inappropriately lumped together with low-luminosity RR Lyrae stars causing calibration errors that led to a value of the Hubble Constant of approximately 500 km/s/Mpc instead of the true value of approximately 70 km/s/Mpc. The higher value meant that an expanding universe would have an age of 2 billion years (younger than the Age of the Earth) and extrapolating the observed number density of galaxies to a rapidly expanding universe implied a mass density that was too high by a similar factor, enough to force the universe into a peculiar closed geometry which also implied an impending Big Crunch that would occur on a similar time-scale. After fixing these errors in the 1950s, the new lower values for the Hubble Constant accorded with the expectations of an older universe and the density parameter was found to be fairly close to a geometrically flat universe. [27]

However, recent measurements of the distances and velocities of faraway galaxies revealed a 9 percent discrepancy in the value of the Hubble constant, implying a universe that seems to be expanding too fast compared to previous measurements. [28] In 2001, Wendy Freedman determined space to expand at 72 kilometers per second per megaparsec - roughly 3.3 million light years - meaning that for every 3.3 million light years further away from the Earth you are, the matter where you are is moving away from Earth 72 kilometers a second faster. [28] In the summer of 2016, another measurement reported a value of 73 for the constant, thereby contradicting 2013 measurements from the European Planck mission, which had indicated a slower expansion value of 67. The discrepancy opened new questions concerning the nature of dark energy, or of neutrinos. [28]

Inflation as an explanation for the expansion

Until the theoretical developments in the 1980s no one had an explanation for why this seemed to be the case, but with the development of models of cosmic inflation, the expansion of the universe became a general feature resulting from vacuum decay. Accordingly, the question "why is the universe expanding?" is now answered by understanding the details of the inflation decay process which occurred in the first 10 −32 seconds of the existence of our universe. [29] During inflation, the metric changed exponentially, causing any volume of space that was smaller than an atom to grow to around 100 million light years across in a time scale similar to the time when inflation occurred (10 −32 seconds).
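Taking the figures in the paragraph above at face value, the implied linear growth factor is enormous. A rough sketch (the atomic size of $10^{-10}$ m is an assumed illustrative value):

```python
# Linear expansion factor implied by "smaller than an atom" growing to
# "around 100 million light years", and the corresponding number of e-folds.
import math

atom_m = 1e-10                 # ~ atomic diameter in metres (assumed for illustration)
light_year_m = 9.461e15        # metres per light year
final_m = 1e8 * light_year_m   # ~100 million light years in metres

factor = final_m / atom_m      # ~1e34 linear expansion
e_folds = math.log(factor)     # ~78 e-folds of growth

print(f"linear factor ~ {factor:.1e}, i.e. ~{e_folds:.0f} e-folds")
```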

Measuring distance in a metric space

In expanding space, distance is a dynamic quantity which changes with time. There are several different ways of defining distance in cosmology, known as distance measures, but a common method used amongst modern astronomers is comoving distance.

The metric only defines the distance between nearby (so-called "local") points. In order to define the distance between arbitrarily distant points, one must specify both the points and a specific curve (known as a "spacetime interval") connecting them. The distance between the points can then be found by finding the length of this connecting curve through the three dimensions of space. Comoving distance defines this connecting curve to be a curve of constant cosmological time. Operationally, comoving distances cannot be directly measured by a single Earth-bound observer. To determine the distance of distant objects, astronomers generally measure luminosity of standard candles, or the redshift factor 'z' of distant galaxies, and then convert these measurements into distances based on some particular model of spacetime, such as the Lambda-CDM model. It is, indeed, by making such observations that it was determined that there is no evidence for any 'slowing down' of the expansion in the current epoch.
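As a sketch of how such a conversion works in practice, the snippet below numerically integrates the comoving distance $D_C = c \int_0^z dz'/H(z')$ for an assumed flat Lambda-CDM model; the parameter values ($H_0 = 70$ km/s/Mpc, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$) are illustrative round numbers, not the fitted values referred to above.

```python
import math

c = 299792.458      # speed of light, km/s
H0 = 70.0           # km/s/Mpc (illustrative)
Om, OL = 0.3, 0.7   # matter and dark-energy density parameters (illustrative, flat model)

def H(z):
    """Hubble parameter at redshift z for a flat Lambda-CDM model."""
    return H0 * math.sqrt(Om * (1.0 + z) ** 3 + OL)

def comoving_distance_mpc(z, steps=10000):
    """Midpoint-rule integration of c/H(z') from 0 to z, returning Mpc."""
    dz = z / steps
    return sum(c / H((i + 0.5) * dz) * dz for i in range(steps))

print(f"D_C(z=1) ~ {comoving_distance_mpc(1.0):.0f} Mpc")   # roughly 3.3 Gpc for these parameters
```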

Theoretical cosmologists developing models of the universe have drawn upon a small number of reasonable assumptions in their work. These workings have led to models in which the metric expansion of space is a likely feature of the universe. Chief among the underlying principles that result in models including metric expansion as a feature are:

  • the Cosmological Principle which demands that the universe looks the same way in all directions (isotropic) and has roughly the same smooth mixture of material (homogeneous).
  • the Copernican Principle which demands that no place in the universe is preferred (that is, the universe has no "starting point").

Scientists have tested carefully whether these assumptions are valid and borne out by observation. Observational cosmologists have discovered evidence – very strong in some cases – that supports these assumptions, and as a result, metric expansion of space is considered by cosmologists to be an observed feature on the basis that although we cannot see it directly, scientists have tested the properties of the universe and observation provides compelling confirmation. [30] Sources of this confidence and confirmation include:

  • Hubble demonstrated that all galaxies and distant astronomical objects were moving away from us, as predicted by a universal expansion. [31] Using the redshift of their electromagnetic spectra to determine the distance and speed of remote objects in space, he showed that all objects are moving away from us, and that their speed is proportional to their distance, a feature of metric expansion. Further studies have since shown the expansion to be highly isotropic and homogeneous, that is, it does not seem to have a special point as a "center", but appears universal and independent of any fixed central point.
  • In studies of large-scale structure of the cosmos taken from redshift surveys a so-called "End of Greatness" was discovered at the largest scales of the universe. Until these scales were surveyed, the universe appeared "lumpy" with clumps of galaxy clusters, superclusters and filaments which were anything but isotropic and homogeneous. This lumpiness disappears into a smooth distribution of galaxies at the largest scales.
  • The isotropic distribution across the sky of distant gamma-ray bursts and supernovae is another confirmation of the Cosmological Principle.
  • The Copernican Principle was not truly tested on a cosmological scale until measurements of the effects of the cosmic microwave background radiation on the dynamics of distant astrophysical systems were made. A group of astronomers at the European Southern Observatory noticed, by measuring the temperature of a distant intergalactic cloud in thermal equilibrium with the cosmic microwave background, that the radiation from the Big Bang was demonstrably warmer at earlier times. [32] Uniform cooling of the cosmic microwave background over billions of years is strong and direct observational evidence for metric expansion.

Taken together, these phenomena overwhelmingly support models that rely on space expanding through a change in metric. It was not until the discovery in the year 2000 of direct observational evidence for the changing temperature of the cosmic microwave background that more bizarre constructions could be ruled out. Until that time, it was based purely on an assumption that the universe did not behave as one with the Milky Way sitting at the middle of a fixed-metric with a universal explosion of galaxies in all directions (as seen in, for example, an early model proposed by Milne). Yet before this evidence, many rejected the Milne viewpoint based on the mediocrity principle.

More direct results of the expansion, such as the change of redshift, distance, flux, angular position and angular size of astronomical objects, have not been detected yet due to the smallness of these effects. Changes in redshift or flux could be observed by the Square Kilometre Array or the Extremely Large Telescope in the mid-2030s. [33]


Syncing up cosmic inflation and the Big Bang

The theory of cosmic inflation, first put forward in the 1980s by Alan Guth, suggests that the universe began its existence as a tiny speck of matter, approximately a hundred-billionth the size of a proton. The speck was filled with extremely energetic matter, so energetic that the pressures within it drove a repulsive effect: the driving force of rapid inflation.

This repulsive force inflated the proto-matter outwards at an incredible rate, so fast that it grew to 10²⁶ times its initial size in less than a trillionth of a second. After this came the first phases of reheating, which Kaiser and his team attempted to reconstruct.

The team believes that the earliest phases of reheating should be marked by resonances, caused by one form of high-energy matter dominating and shaking back and forth in sync with itself across large expanses of space. This led to the explosive production of new particles.

“That behaviour won’t last forever, and once it starts transferring energy to the second form of matter, its own swings will get more choppy and uneven across space,” Kaiser explains. “We wanted to measure how long it would take for that resonant effect to break up, and for the produced particles to scatter off each other and come to some sort of thermal equilibrium, reminiscent of Big Bang conditions.”

Working from initial conditions based on predictions made from measurements of the Cosmic Microwave Background (CMB), the radiation left over from the event known as the 'last scattering' about 3.8 × 10⁵ years after the Big Bang, which permeates the entire universe, the team's computer simulation presented a large lattice upon which multiple forms of matter could be mapped. The team then tracked how the energy and distribution of these forms of matter changed throughout space over time as they varied certain conditions.


34 Replies to “Our Early Universe: Inflation, or Something Totally Wacky?”

Ignorant question … wouldn’t Cerenkov radiation prevent matter from moving fast enough to create superluminal sound waves?

Cosmic inflation incorporates quantum field theory to explain the distribution of matter in the universe. Under normal circumstances, particles of matter and antimatter can pop into existence suddenly before colliding and annihilating each other instantly. These pairs flew apart so rapidly after the universe’s birth that they didn’t have a chance to recombine. The same theory applies to gravitons and antigravitons, which form gravity waves.

There the press release goes into speculative physics. Inflation becomes sensible in quantum field theory, so that step is usually accepted.

That inflation does some kind of "big rip" to sort out matter and antimatter is a large step, as far as I understand. It would result in Alfvén-type cosmologically equi-distributed matter and antimatter volumes that would interact at their interfaces. That is not seen, and also not expected, as there are CP parity violations putting matter and antimatter on slightly different footing.

The antigraviton stuff is totally speculative, I think. The graviton is its own antiparticle as far as I know, same as the photon.

The discussion regarding the rapid separation of particles and antiparticles is an unfortunate description of the accepted mechanism for the origin of large scale structure that probably misrepresents what’s actually going on. A similar thing happens when we try to describe what’s actually happening at the event horizon of a black hole to create Hawking radiation — all the popular conceptualizations fall short of giving an accurate picture.

What’s really happening during inflation is that the vacuum fluctuations of all fields in the theory evolve along with the inflating spacetime. They are amplified and stretched to acausal scales because the comoving Hubble radius actually shrinks during inflation. Fluctuations in the inflaton field — the field that provides the energy density to drive the expansion — become physically manifested as perturbations in the curvature once they’ve reached super-horizon scales. But, the key is that *all* fields fluctuate — even the gravitational field. These excitations are stretched by inflation to create a large scale spectrum of relic gravitational waves. Current observations (like ESA’s Planck satellite) are in hot pursuit of this gravitational wave signal, because it would provide strong support for an early period of inflationary expansion.
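(A quick numerical illustration of that point about the comoving Hubble radius: with H held roughly constant and a(t) growing exponentially, 1/(aH) falls exponentially, so fluctuations of fixed comoving size end up outside the horizon. The units and the value of H below are arbitrary, purely for illustration.)

```python
import math

# During inflation H is roughly constant while a(t) grows exponentially,
# so the comoving Hubble radius 1/(a*H) shrinks; fluctuations of fixed
# comoving size end up "outside the horizon". Arbitrary illustrative units.
H = 1.0
for t in (0.0, 5.0, 10.0):
    a = math.exp(H * t)
    print(f"t = {t:4.1f}  a = {a:10.3e}  comoving Hubble radius = {1.0 / (a * H):10.3e}")
```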

So, it’s not speculation. The theory of quantum fluctuations as the origin of cosmological perturbations that seeded large scale structure is sound and well-tested. The physical interpretation of these fluctuations — as virtual particle loops in a perturbative framework — is perhaps incorrect. It does serve as a popular illustration that is probably close to getting it right.

I sort of let that statement about anti-gravitons fly. The graviton does not have an antiparticle, just as the photon does not have an antiparticle. Photons and gravitons have quantum numbers pertaining to their spin and polarization. They carry no gauge charges, and that is what defines an antiparticle. The gluon, a gauge particle for QCD, carries two color charges for QCD, and thus it does have an antiparticle. The gluon though can’t exist as a bare particle, but only within a quark plasma or vacuum bubble, or as a gluon chain or glue-ball (a plasma of gluons with zero net color charge). A gluon chain can have the same quantum state as a graviton, and in fact in the AdS/CFT physics it is a graviton.


Ouch! Not only is it sound waves in perfect fluids as a first approximation, there are non-obvious ties between the model and causality:

“Consider a perfect fluid in a Special or General Relativity context, with equation of state p/c² = wρ relating the pressure p to the energy density ρ.

If w is constant, the speed of sound is given by c_s²/c² = (1/c²) dp/dρ = w, (1) and if w is slowly varying, this is still a good approximation.

Thus one can get c_s² > c² easily: simply set w > 1 in the macroscopic description, i.e., presume that p/c² > ρ > 0. Then the speed of sound cones lie outside the speed of light cones in all directions at all events, and fluid waves can propagate at speeds up to and including this superluminal speed of sound. Of course this is far from ordinary matter. It does not accord with anything so far experienced in the real world. But does it cause serious problems in terms of causal violations or Lorentz invariance, considered macroscopically?”

And the paper goes on to argue that the arguments for superluminal signals are wrong. The problem is how to choose the proper frame:

“No physical violation is involved in this aspect of the proposal. Lorentz-invariant theories not only can have, but to model some aspects of reality must have, non-Lorentz-invariant solutions (otherwise normal sound waves would not be allowed either). The invariance then maps one solution to another different one, rather than to itself. In other observers’ rest frames, in this case, causal limits will again be determined by the speed of sound cone of the fluid rather than the light cone. There is no way to send a signal into one’s past provided no signal, and no observer, travels outside the sound cone, so this cone is itself the causal limit cone.”

“As in the case of varying speed of light theories (see, e.g., Ref. [26] for a discussion), one must take physics as a whole into account whenever proposing theories of superluminal speed of sound; one cannot just tinker with some part of physics without thinking of the consequences for the whole.”
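(To put numbers on the quoted relation c_s²/c² = w, here is a small sketch; the listed w values, for radiation, a stiff fluid, and a hypothetical w > 1 fluid, are illustrative only.)

```python
import math

# The quoted relation for a perfect fluid with constant equation of state
# p = w * rho * c^2 gives c_s^2 / c^2 = w. Illustrative values only.
C = 299792458.0  # speed of light, m/s

for w in (1 / 3, 1.0, 1.5):      # radiation, stiff fluid, hypothetical w > 1
    c_s = math.sqrt(w) * C
    print(f"w = {w:.2f}  ->  c_s = {c_s:.3e} m/s  ({c_s / C:.2f} c)")
```

For w = 1/3 (radiation) the sound speed is c/√3, while any w > 1 formally gives a superluminal sound speed, which is exactly the case the quoted paper examines.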

Can someone tell me where I can get a complete description of string theory?

The following website is a decent overview of the topic

To Amy Teitel:
I’m sure you meant to say “stars, planets (not planes) and galaxies” didn’t you? Please always check your SPELLING before publishing your articles. I’ve seen this kind of mistake in several of your previous commments and this generates unnecessary confusion among the readers. Thank you.

To Torbjorn Larsson:
I believe that many UniverseToday readers are aware by now that you hold a broad knowledge in astrophysics from your regular comments on this page. However, it would be wise of you just to keep those complicated mathematical equations only to yourself. Most readers certainly enjoy a detailed explanation clarifying the issue in general but couldn’t care less about meaningless abstract equations on and off all the time within the context of your comments. Thank you.

With all due respect, some of us do understand Torbjorn’s equations and appreciate his input. It’s just a shame Disqus doesn’t provide nice formatting for equations.

I’m sure there are plenty of people for whom his posts are too in-depth, but my advice to them would be to skim his posts – not a particularly difficult task for the discerning comment-reader.

If you really do have an issue with his posts, consider reporting them to a moderator – I’m sure the UT staff (which you saw fit to insult for easily-made spelling errors) will be happy to ignore your complaints.

LOL! Love that last sentence!

…but couldn’t care less about meaningless abstract equations on and off all the time within the context of your comments. Thank you

please don’t ask him to be vapid and topical. some people are engaged with the topic and enjoy an in-depth answer.

Junovidor, sigh, in a way, you’re right but you certainly came across as a bit of a blowhard. I can’t read Torbjorn’s equations but I do wish he’d try to make a point with them. As for spelling mistakes, this is a blog. I guess you missed that. Not a published, peer-reviewed text. Take for instance, your spelling of comments…with an extra ‘m’. We all do it. Just takes a slightly more open approach to appreciate the articles notwithstanding.

The correct spelling of the word comment is with TWO ‘m’ so there’s no extra ‘m’ as you mentioned in your reply. Just check a dictionary for that matter.
Comet is the word with one ‘m’ only, and it fits perfectly within the range of the subjects discussed within UniverseToday’s page.
Besides, my comments were not intended to offend anyone but rather to call attention to a couple of things that, when they happen more often than expected, just divert one’s attention and pleasure from the reading. Anyway, no hard feelings on my side.

Well, maybe I’m seeing double but in your initial comment, there are 3, count ’em, THREE m’s in the word comment in your first paragraph. Just sayin’. No hard feelings!

Torbjorn Larsson’s and lcrowell’s equations are actually very interesting and add to the forum. Maybe you are not interested, but that does not mean that everyone else is uninterested.

I agree with you. Often LC’s and Torbjorn’s math is hard to get my head around, but it is great gray matter exercise trying. Also, when it is over my head, it is very easy to skip past to the explanations. Keep that science stuff coming!

Pointing out errors is cool, but be nice about it. Besides, I don’t think anyone would read ‘planes’, and think “wow there are planes in space??”.

Also, you can do what I do if Torbjorn’s mathematics are too complex for you, and that is skip over his post. I don’t get why you have to go out of your way to make a big deal about it. Besides, I know there are a lot of mathematically inclined readers here.

If these things annoy you, though, you should be aware that your post is way more annoying than anything you mentioned. So, you know, maybe just chill.

Can anyone explain, in a lay-way, how sound waves and speed of sound come into this? I was surprised to see them pop up in this discussion at all!

The universe was very compact back then. So compact that one could regard it as one big solid/fluid/gaseous ball. Matter bumping into each other is basically sound.

One of the interesting things about waves is that although the matter bumping into each other does not violate the speed of light, the wave itself can exceed that speed. (e.g. AC current, the electrons stay in place within the wave and do not really move, but the signal as a wave moves very very fast. The same for water waves. )

An ‘oops’ here…no, waves of all sorts can NOT exceed the speed of light. That would violate the ‘neither matter nor information’ specification in the TofR. Because of electrons inhabiting probabilities, there is the slight possibility of FTL electron placement likelihoods on atoms cruising at near light speeds but no, normally, it jes don’ happen. Anyone else weigh in on how sound gets in here?

It has to be pointed out that inflation does not tell us about the origin of the universe. Inflation occurred at a time when the fabric of spacetime was classical or not quantum mechanical. There was then a prior episode where spacetime itself was quantized in some manner. This prior period is a subject of theoretical research.

The basic equation of cosmology is the Friedmann-Lemaitre-Robertson-Walker (FLRW) equation. This can be understood using Newtonian mechanics and gravity. Some people may object to this as overly mathematical, but if you are to have an interest in astronomy a familiarity with Newtonian mechanics is advised. The second law of mechanics by Newton told us that the force on a body is equal to its mass times the acceleration, F = ma. The law of gravity is that the force on a body of mass m by another mass M is

F = -GMm/r^2,

which is equal to the force the mass m exerts on the other mass M separated by a radial distance r. So we put them together to get the full equation

m(dv/dt) = -GMm/r^2,

where the acceleration is a = dv/dt — a little differential calculus. The work-energy theorem of elementary physics tells us that the force displaced through a distance is equal to the kinetic energy. So we then write this as ∫F·dr, and use dr = (dr/dt)dt (chain rule in calculus) = -v dt (motion in the negative r direction = attraction) and get

(1/2)v^2 = GM/r.

We recognize (1/2)v^2 as the "energy per mass" and this equals the change in the potential.

We can convert this equation to a cosmological equation. We consider the distance to a galaxy r = ax, where a is a scale factor which is used to slide the ruler measure x to give a distance r. We then have that all of the dynamics are contained in the scale factor, so v = (da/dt)x, which we write as a'x, the prime denoting a time derivative. The above equation becomes

(1/2)(a'x)^2 = GM/(ax).

The mass M is all of the mass of galaxies in some volume V = (4π/3)r^3 = (4π/3)(ax)^3. So the mass M is equal to the density of mass (mass-energy) in a volume, or M = ρV, and this equation is then

(a'/a)^2 = 8πGρ/3.

This is the FLRW equation for cosmology, where the motion of the space is given by a scale factor a, which evolves with time. This is a Hamiltonian (energy equation) which governs dynamics.

The Hamiltonian for the scale factor a in FLRW with general relativity is more generally

H = (a'/a)^2 - 8πGρ/3 + k/a^2 = 0,

which has different solutions for different densities ρ. The factor -k/a^2 is a curvature term, where k = 1 means the space is a sphere, k = 0 means it is flat and infinite, and k = -1 means it is a hyperbolic saddle shape. This leads to a dynamical equation of motion when considered as a Hamiltonian for k = 0:

a'/a = sqrt(8πGρ/3).

For a constant density ρ = Λ/(8πG), Λ the cosmological constant, this reflects some constant mass-energy in the vacuum. This solution is an exponential solution

a(t) = a(0) exp(sqrt(Λ/3) t),

which is the inflationary solution. The space expands exponentially with time, which includes an accelerated expansion that also increases. The current state of the universe is of this nature, where there is some dark energy (vacuum energy or quantum zero point energy) which has a larger density than the mass-energy of stuff like galaxies and even dark matter.

The early inflationary period was an epoch with a far larger vacuum energy, as much as 10^110 times what it currently is. The exponential expansion went through 63 e-folds, or for time such that exp(sqrt(8πGρ/3) t) ≈ e^63. This stretched out the space enormously, which means the earlier initial data of the universe prior to inflation are stretched out to huge redshift values.
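(As a cross-check on the exponential solution, the Friedmann equation for constant density can be integrated numerically. A minimal sketch in arbitrary units, with H = sqrt(8πGρ/3) simply set to 1; nothing here is a physical value.)

```python
import math

# Numerically integrate a'/a = H (constant) for a constant vacuum density
# and compare with the analytic exponential solution a(t) = exp(H t).
# Units and the value of H are arbitrary illustrative choices.
H = 1.0          # stands in for sqrt(8*pi*G*rho/3) in these units
dt = 1e-4
a, t = 1.0, 0.0
while t < 5.0:
    a += H * a * dt          # Euler step for da/dt = H * a
    t += dt

print(a, math.exp(H * 5.0))  # the two agree closely for this step size
```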

Very nice to derive the Friedmann equation using just Newtonian mechanics!

1) In the “(a’x)^2 = GM/ax.” isn’t a factor of (1/2) at the left missing?

2) What do you mean when you call the equation "(a'/a)^2 = 8πGρ/3" a "Hamiltonian"? The equation above does not look like a Hamiltonian function (for all readers: a Hamiltonian function is a function of momentum and position that is equal to the sum of kinetic and potential energy).

3) Isn't dark energy, with its constant energy density in an expanding Universe, the most flagrant violation of conservation of energy?

I did indeed miss a factor of a half, which I changed. This did not affect the rest of the calculation. I also wrote the Hamiltonian explicitly with H = 0.

As for energy violation, this equation has to do with a conservation of energy density. It is also distributed on an infinite space R^3. So the issue of energy conservation is something that is sort of hidden away. Conservation of energy is “funny” in general relativity. It is only explicitly defined if there is a symmetry of the spacetime along the time direction which defines a constant timelike momentum vector. This symmetry is called a Killing vector. Spacetime cosmologies do not explicitly have this. To get into the matter more deeply does require cranking out the curvatures and field equations of general relativity. The stress-energy tensor contains a term ρ + p, where the pressure term for the de Sitter spacetime obeys p = -ρ. This negative pressure can in a sense be thought to remove work, think pdV = NkdT = dS = differential of entropy, which prevents the generation of energy from the constant vacuum energy density under expansion of space.
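(A quick way to see the contrast being described: the standard FLRW fluid equation gives ρ ∝ a^(-3(1+w)), so matter and radiation dilute as space expands while a w = -1 vacuum term does not. A minimal sketch; the scaling law is the textbook result, and the w values and normalization are purely illustrative.)

```python
# How the energy density scales with the scale factor a for different
# equations of state w, using rho ∝ a^(-3*(1+w)). Illustrative only.
def density_scaling(a, w, rho0=1.0):
    return rho0 * a ** (-3.0 * (1.0 + w))

for w, label in [(0.0, "matter"), (1 / 3, "radiation"), (-1.0, "vacuum (w = -1)")]:
    print(label, [round(density_scaling(a, w), 4) for a in (1, 2, 4)])
# Matter dilutes as a^-3 and radiation as a^-4, while the w = -1 vacuum
# density stays constant as space expands, which is exactly the behaviour
# that makes "energy conservation" subtle in an expanding spacetime.
```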

It is rather remarkable that FLRW spacetime physics can be discussed in a first order model by just using Newtonian mechanics. In fact if Newton had been aware of the expanding universe he could have worked this up.

If conservation laws come from symmetries, like conservation of energy from time “translations” in spacetime (if I understand this right, this means that the geometry of space does not change in the timelike direction)…

… and since gravitational interactions are not in general time-symmetric (except special cases like the Minkowski, Schwarzschild and Kerr spacetimes that do not change in the timelike direction)…

… then in the general case gravity violates conservation of energy?

…and the reason we have not measured anything like that in the lab or observed it in the nearby universe is because the gravitational interaction is very weak, and in common astrophysical situations the Schwarzschild solution is an excellent approximation to the gravitational field of stars, planets, etc., and this spacetime does not change with time?

Radial symmetric gravity fields do have timelike isometries or Killing vectors. So black holes and more ordinary gravity fields do conserve energy. Gravity waves manage to also conserve energy. There is a bit of mathematics involved with curvature tensors. The Riemann curvature tensor is the sum of the Weyl curvature and the Ricci curvature. How this sum is done is a bit complicated. The Weyl curvature is a form of curvature which preserves volumes. If you have a sphere in two dimensions that encloses a volume the Weyl curvature is such that as the spacetime evolves, or that points on and inside this sphere moves on some geodesic the sphere may be deformed, but the volume remains constant. Think of there being test masses at every point on the sphere and take the limit the mass goes to zero. The Ricci curvature does not preserve volume. If you put a spherical shell of dust around the Earth, a star or black hole, the dust particles will fall inwards and contract the volume they inscribe.

There is an elegant mathematics developed by Penrose, Petrov and Pirani. The Weyl curvature is a type of operator or machine that acts on Killing vectors to give eigenvalues. The algebra of this system describes the PPP system of spacetime types. Some of these solution types are the D-types for black holes, the I, II, and III types (Robinson-Trautman spaces) and type N spaces for gravity waves. This physically may be thought of as the near field term, the black hole, the far field term as gravity waves and intermediary symmetries. This is comparable to the near and far field solutions to Maxwell’s equations.

Cosmologies describe the expansion of space, so if you placed a spherical shell of dust particles in a cosmology that are not gravitationally bound to each other that shell will expand its volume. So there is no Weyl curvature which gives a timelike isometry. With a spherical symmetric gravity, say a black hole, the solution is defined by regions that do not enclose the mass. This is similar in technique to using Gaussian surfaces in electromagnetism.

Symmetry is important in defining a conserved quantity. Emmy Noether worked out a general theorem for this early last century. In classical mechanics momentum may be thought of as the generator of a position change. If the space has homogeneous symmetry, so it has no distinguishing properties from point to point, momentum is conserved. The same holds with time: if time translations are invariant under rescaling or reparameterization then energy is conserved. If the space is isotropic under rotations then angular momentum is conserved. These define conjugate pairs of observables or variables:

(position, momentum), (time, energy), (angle, angular momentum),

which in classical mechanics define Poisson bracket structures and in a quantized setting the commutators of variables which give their uncertainty relationships.

1) a group of massive and dense objects bound by gravity, like a stellar cluster or two black holes orbiting each other?

2) micro black holes resulting from particle colliders (or cosmic rays)?

3)An accreting black hole (the mass is increasing, so the hole is growing with time)?

4)A small evaporating black hole (near the end of its life) that is shrinking rapidly?

In those cases, isn't the spacetime NOT time-invariant, so that energy isn't conserved? Or am I missing something?

In those cases mass-energy is probably conserved, though these systems have less symmetry to work with so it is hard to define an ADM mass. A system of two black holes in an orbit is not integrable, and only approximately so if the two black holes are far apart and the system approaches a near Newtonian situation. Two tightly orbiting black holes will produce gravity waves and this is more complex than a Newtonian two-body system that is integrable. In Newtonian mechanics n-body systems for n > 2 are not solvable in closed form, and in general relativity systems are not generally integrable for n > 1. One has to use perturbation methods (parameterized post-Newtonian) or numerics in order to approximate these systems.

Quantum black holes likely also conserve mass-energy, though this system is only understood with classical back reactions used to treat the black hole. A fully quantum mechanical or quantum field theoretic treatment of black holes that are a few Planck masses is a fascinating topic in conjunction with string theory and quantum information. I will bow out of discussing that here, for that is a very deep and abstract subject. Quantum black holes which are thought to be maybe produced in the LHC, are not really black holes. They are QCD systems with certain correspondences with AdS-Schwarzschild spacetimes. This means gluon chains or quark-gluon plasmas can have small quantum amplitudes which correspond to black holes. The LHC can’t produce a real certified black hole, and all the flapdoodle last decade over this was ridiculous nonsense.

“Under normal circumstances, particles of matter and antimatter can pop into existence suddenly before colliding and annihilating each other instantly”
As implied here, the Universe seems to do everything in balance, therefore I would expect an equal number of matter and anti-matter particles forming. Yet as I understand it, the theory to explain why our Universe exists as is would require extra normal matter being formed. This imbalance just doesn’t seem right, but where could the remainder of the anti-matter have gone? Could there be an anti-matter universe out there somewhere?

Humans also have trillions of cells, and chromosomes of different variations.

Imagine how small we are in this universe, and multiply that by 10 to the power of a billion, so many zeros.





Homogeneous static universe

Consider an infinite homogeneous static universe with a constant mass density $\rho$. Suppose we calculate the force on a test particle located at a certain point according to Newton's law of gravity. It would be logical to conclude from a symmetry argument that the force on the particle should be zero. But is this true? For as we know, the force on that particle depends on how we add up the contributions of mass in the universe. One way to calculate the force will give a zero net force on the particle, and another way of doing it will give a net force in a certain direction. In fact, depending on how we do the sum, we can come to the conclusion that the particle can be subject to any force, even an infinite force.

So how is this problem resolved? From a mathematical standpoint it should have a solution.
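One way to make the ambiguity concrete is Newton's shell theorem: if the contributions are summed in spherical shells about some arbitrarily chosen centre, everything outside cancels and the test particle feels only the ball of matter between it and that centre, g = (4/3)πGρd, which depends entirely on the arbitrary offset d. A minimal numerical sketch; the density value and the offsets are merely illustrative, and this does not settle the question, it just exhibits the order-dependence.

```python
import math

# Shell-theorem bookkeeping for an infinite homogeneous medium of density rho:
# summing shells about a chosen centre leaves only the ball of radius d
# (the offset of the test particle from that centre) acting on the particle,
# giving g = (4/3) * pi * G * rho * d. The "answer" depends on the choice of d.
G = 6.674e-11       # m^3 kg^-1 s^-2
RHO = 1e-26         # kg/m^3, a roughly cosmological density, chosen for illustration

def accel_toward_centre(d_m):
    return (4.0 / 3.0) * math.pi * G * RHO * d_m

for d in (0.0, 1e22, 1e24):     # different arbitrary choices of summation centre
    print(d, accel_toward_centre(d))
```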


Why Cosmic Inflation's Last Great Prediction May Fail

Image credit: Bock et al. (2006, astro-ph/0604101); modifications by E. Siegel.

One of the greatest scientific achievements of the early 20th century was the discovery of the expanding Universe: that as time goes on, distant galaxies are receding from us, as the space between us expands according to Einstein's General Relativity. In the mid-20th century, a great idea was put forth, that if the Universe is getting bigger and cooler today, then it was smaller, hotter and denser in the past: the Big Bang. The Big Bang made a few extra predictions:

  • there would be a great cosmic web of structure, with small, medium and large-scale structures clumped together in certain patterns,
  • there would be a leftover glow of radiation from the early Universe, that's cooled to just a few degrees above absolute zero,
  • and there would be a specific set of ratios for the lightest elements in the Universe, for the different isotopes of hydrogen, helium and lithium.

Image credit: NASA / WMAP science team, of the discovery of the CMB in 1965 by Arno Penzias and Bob Wilson.

In the 1960s and 1970s, these predictions were all confirmed to varying degrees of accuracy, and the Big Bang became overwhelmingly accepted as the leading theory of where everything we can perceive and detect in the Universe originated. But there were a few questions that were unanswered when it came to the Big Bang, a few phenomena that were completely unexplained within this framework.

  1. Why was the Universe the exact same temperature everywhere?
  2. Why was the Universe so spatially flat? Why did the expansion rate and the matter/energy density balance each other so perfectly well?
  3. If the Universe achieved such high energies early on, why haven't we seen the stable relics that should be spread throughout the Universe from it?

Image credit: E. Siegel, from his book Beyond The Galaxy. If these three different regions of space never had time to thermalize, share information or transmit signals to one another, then why are they all the same temperature?

If the Universe were expanding according to the rules of General Relativity, there's no reason to expect that regions of space separated by distances greater than the speed of light were connected, much less the same exact temperature. If you take the Big Bang all the way back to its logical conclusion -- to an infinitely hot, dense state -- there's no way to come up with answers to these questions. You just have to say, "it was born this way," and from a scientific point of view, that's wholly dissatisfying.

But there's another option. Perhaps, instead of the Universe just being born at the moment of the Big Bang with these conditions, there existed an early stage that set up these conditions and the hot, dense, expanding and cooling Universe that gave rise to us. This would be a job for theorists: to figure out what possible dynamics could set the stage for the Big Bang with these conditions to occur. In 1979/1980, Alan Guth put forth the revolutionary idea that would change the way we thought about our Universe's origins: cosmic inflation.

Image credit: Alan Guth’s 1979 notebook, tweeted via @SLAClab, from https://twitter.com/SLAClab/status/445589255792766976.

By postulating that the Big Bang was preceded by a state where the Universe wasn’t filled with matter-and-radiation, but rather by a huge amount of energy inherent to the fabric of space itself, Guth was able to solve all of these problems. In addition, as the 1980s progressed, further developments occurred that made it clear that, in order for inflationary models to reproduce the Universe we saw:

  • to fill it with matter-and-radiation,
  • to make the Universe isotropic (the same in all directions),
  • to make the Universe homogeneous (the same in all locations),
  • and to give it a hot, dense, expanding state,

there were quite a few classes of models that could do it, as developed by Andrei Linde, Paul Steinhardt, Andy Albrecht, with additional details worked out by people like Henry Tye, Bruce Allen, Alexei Starobinskii, Michael Turner, David Schramm, Rocky Kolb and others. But the simplest ones -- the ones that solved the problem and had the fewest free parameters -- fell into just two categories.

Image credit: Ethan Siegel, with google's graph tool. The two simplest classes of inflationary potentials, with chaotic inflation (L) and new inflation (R) shown.

There was new inflation, where you had a potential that was very flat at the top and that the inflaton field could “roll down, slowly” to reach the bottom, and there was chaotic inflation, where you had a U-shaped potential that, again, you’d roll down slowly.

In both these cases, your space would expand exponentially, be stretched flat, have the same properties everywhere, and when inflation came to an end, you’d get back a Universe that very much resembled our own. In addition, you’d also get six extra, new predictions out, all of which had not yet been observed at the time.

  1. A Perfectly Flat Universe . Because inflation causes this rapid, exponential expansion, it takes whatever shape the Universe happened to be and stretches it to tremendous scales: to scales much, much larger than what we can observe. As a result, the part that we see looks indistinguishable from flat, the same way that the ground outside your window may look flat, but it's actually part of the entire, curved Earth. We just can't see enough to know what the true curvature actually is.
  2. A Universe with fluctuations on scales larger than light could’ve traveled across. Inflation — by causing the space of the Universe to expand exponentially — causes what happens on very small scales to get blown up to much larger ones. This includes quantum fluctuations, which normally fluctuate in-place in empty space. But during inflation, thanks to the rapid, exponential expansion, these small-scale energy fluctuations get stretched across the Universe onto gigantic, macroscopic scales that should wind up spanning the entire visible Universe!
  3. A Universe with a maximum temperature that's not arbitrarily high. If we could take the Big Bang all the way back to arbitrarily high temperatures and densities, we'd find evidence that the Universe once reached at least the temperature scale at which the laws of physics break down: the Planck scale, or around energies of 10^19 GeV. But if inflation occurred, it must have occurred at energy scales lower than that, with the result that the maximum temperature of the Universe post-inflation must be some energy scale lower than 10^19 GeV.
  4. A Universe whose fluctuations were adiabatic, or of equal entropy everywhere. Fluctuations could have come in different types: adiabatic, isocurvature, or a mixture of the two. Inflation predicted that these fluctuations should have been 100% adiabatic, which means that detailed measurements of the types of quantum fluctuations the Universe started off with should reveal signatures in the microwave background and in large-scale cosmic structure.
  5. A Universe where the spectrum of fluctuations was just slightly less than perfectly scale invariant (ns < 1). This is a big one! Sure, inflation generically predicts that these fluctuations should be nearly scale-invariant. But there’s a slight caveat, or a correction to that: the shape of the inflationary potentials that work — their slopes and concavities — affect how the spectrum of fluctuations departs from perfect scale invariance. The two simplest classes of inflationary models, new inflation and chaotic inflation, give predictions for ns that typically cover the range between 0.92 and 0.98.
  6. And finally, a Universe with a particular spectrum of gravitational wave fluctuations . This is the last one, and the only major one that hasn’t yet been confirmed. Some models — like the simple chaotic inflation model — give large-magnitude gravitational waves (the kind that could’ve been seen by BICEP2), while others, like the simple new inflation model, can give very small-magnitude gravitational waves.

Image credit: ESA and the Planck Collaboration.

Over the past 35 years, we've made incredible, all-sky measurements of the fluctuations in the cosmic microwave background, from scales as large as the entire visible Universe down to angular resolutions of a mere 0.07°. As space-based satellites became more and more capable over time -- COBE in the 1990s, WMAP in the 2000s, and now Planck in the 2010s -- we've gained incredible insight into the Universe when it was less than 0.003% of its current age.

Image credit: Sloan Digital Sky Survey (SDSS), including the current depth of the survey.

Similarly, large-scale structure surveys have become incredibly ubiquitous, with some covering the entire sky and others covering huge patches at even greater depths. With the Sloan Digital Sky Survey providing the best modern data sets, we've been able to confirm the first five of these six predictions, placing inflation on a very firm footing.

  1. The Universe is observed to be spatially flat -- with a total density parameter of 1.0007 ± 0.0025, consistent with exactly 1 -- as best shown by the large-scale structure of the Universe.
  2. The fluctuations in the cosmic microwave background show a Universe with scales that extend up to and beyond the horizon of the observable Universe.
  3. The maximum temperature that our Universe ever could have achieved, as shown by the fluctuations in the cosmic microwave background, is only

That last number, ns, is really, really important if we want to look for the sixth and final prediction of inflation: gravitational wave fluctuations.

Image credit: NASA / WMAP science team.

The spectrum of fluctuations in the microwave background looks like the squiggled line, above, today, but it grew out of the interplay of all the different forms of energy over time, from the end of inflation until the Universe was 380,000 years old. It grew from the density fluctuations at the end of inflation: the horizontal line. Only, that line isn't quite horizontal: there's a slight tilt to it, and the slope represents the departure of the spectral index, ns, from 1.

The reason this is important is that inflation makes a specific prediction for a special ratio (r): the ratio of the gravitational wave (tensor) fluctuations to the density (scalar) fluctuations, which is tied to the value of ns. For the two main classes of inflationary models -- as well as in other models -- there is a huge disparity in what r is predicted to be.

Image credit: Kamionkowski and Kovetz, to appear in ARAA, 2016, from http://lanl.arxiv.org/abs/1510.06042. Results presented at AAS227.

For chaotic models, r is typically very large: no smaller than about 0.01, where 1 is the maximum conceivable value. But for the new inflation models, r can vary from as large as about 0.05 down to tiny, minuscule numbers like 10^-60! But these various r values are often correlated with specific values for ns, as you can see above. If ns turns out to actually be the value that we've best measured it to be right now -- 0.968 -- then the simplest models you can write down for both chaotic inflation and new inflation only give values of r that are bigger than about 10^-3.
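For a sense of where those numbers come from, the commonly quoted slow-roll estimates for monomial potentials V ∝ φ^n are ns ≈ 1 - (n + 2)/(2N) and r ≈ 4n/N after N e-folds. A minimal sketch follows; the choice N = 60 is a conventional illustrative value, not a measurement, and this is not the analysis used in the paper above.

```python
# Slow-roll estimates for monomial inflaton potentials V ∝ phi^n:
#   n_s ≈ 1 - (n + 2) / (2 N)   and   r ≈ 4 n / N
# after N e-folds of inflation. Textbook approximations, shown here only to
# illustrate how n_s and r are tied together for the simplest models.
def slow_roll(n, N=60):
    ns = 1.0 - (n + 2.0) / (2.0 * N)
    r = 4.0 * n / N
    return ns, r

for n in (2, 4):                 # phi^2 ("chaotic") and phi^4 potentials
    ns, r = slow_roll(n)
    print(f"V ~ phi^{n}: n_s = {ns:.3f}, r = {r:.3f}")
# For N = 60, the phi^2 model gives n_s ≈ 0.967 and r ≈ 0.13, large enough
# that the B-mode searches described below can test it.
```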

As reported by Mark Kamionkowski in his talk at AAS (and based on his paper here), all the simple models one can write down, for the measured value of ns, mean that r can't range from 10^-60 to 1; it can only range from 10^-3 to 1. And this could be very, very problematic in short order, because there are a whole host of ground-based surveys that are measuring the type of signal that can measure r, already constrained to be less than 0.09, if it's greater than or equal to

Image credit: Kamionkowski and Kovetz, to appear in ARAA, 2016, from http://lanl.arxiv.org/abs/1510.06042. Results presented at AAS227.

The gravitational wave fluctuations produced by inflation cause both E-mode and B-mode polarizations, but the density fluctuations (and ns) show up in only the E-modes. So if you measure the B-mode polarizations, you can learn about the gravitational wave fluctuations and determine r!

This is what experiments such as BICEP2, POLARBEAR, SPTPOL and SPIDER, among others, are working to measure right now. There are B-mode polarization signals caused by lensing effects, but if the inflationary fluctuations are larger than r ~ 0.001, they'll be able to be seen in 5-10 years by the experiments running and planned to run over that time.

Image credit: Planck science team.

If we find a positive signal for r, either a chaotic inflation (typically if r > 0.02) or a new inflation (typically for r < 0.04, and yes, there's overlap) model could be strongly, strongly favored. But if the measured value for ns stays what it's thought to be right now, and after a decade we've constrained r < 10^-3, then the simplest models for inflation are all wrong. It doesn't mean inflation is wrong, but it means inflation is something more complicated than we first thought, and perhaps not even a scalar field at all.

If nature is unkind to us, the last great prediction of cosmic inflation -- the existence of primordial gravitational waves -- will be elusive to us for many decades to come, and will continue to go unconfirmed.

The preceding article was partially based on information obtained during the 227th American Astronomical Society meeting, some of which may be unpublished.