Astronomy

Dark matter emitting EM radiations beyond our currently known spectrum?


Could it be possible that dark matter is emitting EM radiation in a part of the spectrum that is not yet known or perceivable by humans and is undetectable by man-made instruments?

A type of radiation that our instruments were just not designed to pick up on? Being insensitive and not advanced enough? Thus we do not yet have all the pieces to solve the puzzle?

Edit: We need to remember that there are radio waves and gamma rays at either end of the spectrum, and we are aware that dark matter is definitely not emitting visible light or any of the other known types of EM waves. Considering that gamma rays were discovered only roughly a hundred years ago, isn't it possible that the spectrum could be further extended on either side and that we're still not aware of it?


Dark matter gets its name precisely because it does not emit electromagnetic radiation. Humans have built detectors covering every range of the electromagnetic spectrum, and a single particle is unlikely to emit at wavelengths much longer than those we can already detect.

However, dark matter may produce light in collisions with other dark matter particles or normal matter. That is the premise of large scintillator detectors.


If it could produce EM radiation, it would have to interact via EM fields. In that case there would be internal friction in dark matter clouds, because friction ultimately boils down to electric forces.

But the dark matter distributions we're observing seem to indicate that there is no friction between dark matter particles, whatever they are.

Ergo, dark matter cannot produce EM radiation at any frequency or wavelength.


Dark matter emitting EM radiations beyond our currently known spectrum? - Astronomy

Paper Information

Journal Information

International Journal of Theoretical and Mathematical Physics

p-ISSN: 2167-6844 e-ISSN: 2167-6852

Received: Apr. 13, 2021 Accepted: Apr. 30, 2021 Published: May 15, 2021

The God Equation: Theory of Everything

Research Scientist, Imo State, Nigeria

Correspondence to: Prince Jessii, Research Scientist, Imo State, Nigeria.


Copyright © 2021 The Author(s). Published by Scientific & Academic Publishing.

This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Finding an ultimate theory, a master theory that fully explains all physical aspects of this universe, is a long-standing goal in physics. However, after reading this paper, it is expected that the ultimate theory (the Theory of Everything), which is also said to be the merger of quantum theory with general relativity, should no longer be categorized as a "yet to be achieved" task in physics; rather, the physics community should acknowledge this scientific paper in order to move forward. In 2019, the "Theory of Everything" was discovered by Prince Jessii. From 2019 to 2021, this theory developed gradually, resulting in its complete form. This paper is the "Theory of Everything" itself, explaining in detail the creation and origination of all theories and equations in physics. A key component of the Universe (space-time) gave rise to the Ultimate Physics Equation (the God Equation), from which all particles, physical constants, and equations/laws originate. Mathematical demonstrations and calculations are also presented to show that these originate from a space-time parameter.

Keywords: Space-time, Dark Energy, Theory of Everything, Dark Matter, Physical Constants, Gravity, Quantum


Examples Of Electromagnetic Radiation In Everyday Life

Here are 10 examples of electromagnetic radiation which we come across daily and the harmful effects that result from it:

1. Visible Light Waves

Let’s start with the most visible type of electromagnetic radiation: visible light waves. This is the part of the spectrum that our eyes can detect, giving us a clear, observable field of view.

We receive EM radiation across a range of wavelengths and frequencies, carried as waves or particles, which together make up the electromagnetic spectrum. Besides sunlight, visible light also comes from artificial illumination and photography devices.

An excellent example of light waves radiation that you can see is the light from the screen that you look at while reading this information.

One of its most important properties is color, which is really a feature of human vision. Objects do not have color in themselves; the light they reflect passes through the filter of “cones” in our eyes, cells sensitive to particular parts of the visible spectrum, which transform white light into the colors we perceive.

The effects of visible light radiation vary according to their range and exposure. On one hand, visible light waves are responsible for life on earth, as they boost natural processes like photosynthesis. On the other hand, they can cause photodegradation and thermal damage.

In humans, light waves support proper biological functioning and stable mental health. Reduced exposure to natural light can disrupt the synthesis of certain neurotransmitters and has been linked to depression and other health problems.

Conversely, too much light wave radiation can contribute to macular degeneration in your eyes and temporary skin conditions.

2. Radio Broadcasting Waves

Radio waves are the basic frequencies used in communications. They are often distributed by a transmitter and vary in wavelength (a rough wavelength-to-frequency conversion sketch follows this list), which may be anywhere between:

  • 1-2 km – also known as long waves and used in classic radio broadcasting
  • 100 m – generally known as medium waves that are used for AM radio broadcasting
  • 2 m – Very High Frequency (VHF) waves generally used in FM radio broadcasting
  • 1 m or less – Ultra High Frequency (UHF) waves used for police and military radio communications
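As a rough illustration of how these bands relate to broadcast frequencies, the small Python sketch below converts the wavelengths listed above into frequencies using frequency = speed of light / wavelength. The labels and sample wavelengths are loosely taken from the list; they are illustrative, not regulatory band definitions.

```python
# Illustrative only: convert the approximate wavelengths listed above into
# frequencies using f = c / wavelength.
C = 299_792_458.0  # speed of light, m/s

def frequency_mhz(wavelength_m: float) -> float:
    """Frequency in MHz for a given wavelength in metres."""
    return C / wavelength_m / 1e6

bands = [
    ("long wave (classic radio)", 1500.0),  # roughly 1-2 km
    ("medium wave (AM)", 100.0),
    ("VHF (FM)", 2.0),
    ("UHF (police/military)", 1.0),
]

for label, wavelength in bands:
    print(f"{label:>26}: {wavelength:7.1f} m  ->  about {frequency_mhz(wavelength):8.2f} MHz")
```

Running it shows, for example, that the 100 m medium-wave figure works out to about 3 MHz.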

Despite their societal benefits, radio waves may have harmful consequences for human health. The World Health Organization has classified radiofrequency radiation as possibly carcinogenic, and some studies have suggested links between intense exposure to radio waves and leukemia and other health disorders.

The jury is still out on whether radio wave radiation actually produces adverse effects such as cancer.

What we know so far is that our bodies absorb these waves somewhat like the antennas of radio sets. Long-term exposure produces a mild heating effect and has been reported to trigger headaches, vision problems, sleep disruption and even memory loss.

3. Cell Phone Radiation Waves

Another form of communication that uses radio waves is cell phone transmission. Whether you have a smartphone or an older, simpler model of mobile phone, you are bound to use it at least once per day.

These gadgets are a lifestyle commodity that more than 60% of the world’s population uses every day. In fact, the number of cell phone users is currently estimated to exceed 5 billion globally.

It has been suggested that the radiation produced by cell phones could lead to cancer and the formation of terminal brain tumors.

Nevertheless, the medical studies conducted on this matter have not produced enough evidence to support the claim that mobile phone technologies have carcinogenic effects.

The only palpable effect of radiofrequency is heating. You might notice a slight thermal reaction of your skin and ears every time you speak on your cell phone for more than 20 minutes.

This harmless side effect has led some to believe that regular mobile phone use can lead to cancer through this thermal effect.

However, since the radio waves emitted by smartphones are a form of non-ionizing radiation, like those from radio antennas, there is minimal risk of the kind of adverse health consequences produced by X-rays or radon.

The amount of radiation absorbed by the human body from mobile phones and similar devices is measured through a unit called SAR or Specific Absorption Rate (check my article on Specific Absorption Rate).

The FCC requires that cell phones sold in the US not exceed a SAR of 1.6 watts per kilogram of tissue, a level considered to carry a low risk of harmful radiation.
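As a small illustration of how such a limit might be checked, here is a minimal Python sketch; the helper name and the sample SAR values are hypothetical, not taken from any real handset.

```python
# Hypothetical example: compare reported SAR values against the 1.6 W/kg limit
# mentioned above. The phone names and numbers are made up for illustration.
FCC_SAR_LIMIT_W_PER_KG = 1.6

def within_limit(reported_sar_w_per_kg: float) -> bool:
    """True if the reported SAR does not exceed the FCC limit."""
    return reported_sar_w_per_kg <= FCC_SAR_LIMIT_W_PER_KG

for model, sar in [("phone A", 1.17), ("phone B", 1.58), ("phone C", 1.72)]:
    verdict = "OK" if within_limit(sar) else "exceeds limit"
    print(f"{model}: {sar:.2f} W/kg -> {verdict}")
```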

4. Wifi and Bluetooth Waves

WiFi is one of the most used technologies in everyday life. Whether you have a wireless router at home or you use the one at work, you are bound to connect to speedy wireless internet almost every day of the week.

Wireless routers are also present in cafes, restaurants, and libraries. Even public open spaces like parks, beaches and concert arenas employ this technology.

Studies have shown that wireless internet might produce harmful side effects for human health. Read my related article on wifi radiation in which I have highlighted all the risks associated with it as far as babies and children are concerned.

The fact that most people are exposed to this technology 24/7, and that there is no physical barrier to stop it, makes WiFi radiation potentially dangerous. However, this view has been disputed.

The same goes for Bluetooth radio waves that also form a frequent mode of communication and device-pairing technology.

WiFi routers and Bluetooth devices may carry a risk of harmful side effects on your health. Read my article which compares WiFi and Bluetooth radiation effects.

Extensive exposure to the radio waves emitted by these technologies has also been reported to cause mild headaches, sleep disruption and slight dizziness.

If you want to ensure higher protection for you and your family against the potential dangers of WiFi and Bluetooth technologies, you can employ several practices in your everyday life. Some of them include disconnecting the WiFi router at night and spending more time outside in open, natural areas.

You can also get EMF protection products which are easily available on e-commerce sites.

5. TV Broadcasting Waves

Television radiation has been a constant part of our everyday lives since the 1950s. It is one of the oldest forms of human-made radiation, and for many years the rise in cancer and tumor deaths has been popularly blamed on it.

Although several clinical studies suggest that there is a minimal risk that TV radiation can produce terminal diseases, some people are still considering it a harmful presence in their homes.

The theory that you can absorb harmful radiation from your TV stems from the possibility that old television sets may release X-rays. Cathode ray tube (CRT) technology had a small chance of producing X-rays.

This happened when electrons accelerated through a high voltage struck an obstacle inside the vacuum tube. Older generations were aware of this issue, which is why people were often advised to keep a safe distance from their television sets.

Nowadays, CRT technology is rarely used and effectively obsolete. Modern sets use safety circuits and regulated power supplies that make harmful TV radiation an implausible concern.

Today’s TV sets and computer monitors use Liquid Crystal Displays (LCD) or plasma displays, which are incapable of producing X-rays. Therefore, they do not represent a significant risk to your health.

Again, as it is the case with most devices that emit radio waves, intense exposure may lead to migraines, restlessness, and dizziness.

6. Microwaves

Microwaves are primarily used for cooking. Almost every household uses a microwave oven to heat or defrost food, and this device has become a common kitchen appliance all over the world since the 1970s.

When you heat your meal in a microwave oven, the water molecules absorb microwave radiation and generate a thermal increase that also kills any present bacteria.

Since the only form of energy transmitted to your food is heat, there is a minimal risk of contamination or radiation that can affect your health.

The only way that microwaves can hurt you is by exposing yourself to high levels, which may cause painful burns.

The parts of your body that are most sensitive to microwaves are your eyes and testes, since their low blood circulation cannot disperse the excess heat quickly enough to prevent injury.

Your best safety precaution against microwaves is to avoid using an oven that has a damaged door, and which does not ensure optimal enclosure.

Microwave radiation is also used in communication and satellite transmissions. Because microwaves have a lower frequency and longer wavelength than visible light, they can penetrate clouds, smoke and rain more easily.

This is one of the main reasons why microwaves are used to transmit signals into space orbit. Read my article on microwave radiation.

7. Infrared Waves

Infrared waves are set somewhere between visible light waves and microwaves.

Some of them are present in everyday life, such as those emitted by your TV remote or smoke detector, and are virtually harmless. This type of radiation is called “near infrared waves.”

Their counterparts, the “far infrared waves” are generally invisible to the human eye, and they give off more heat.

Infrared radiation (wavelengths longer than about 750 nm) is harmful to the human body mainly when it is intense.

At high intensities, it can produce severe damage to your eyes. Glassworkers and welders are susceptible to cataracts due to the thermal effect produced by intense infrared waves.

Again, the distance between you and the source of heavy infrared radiation is crucial. This is the reason why workers use high levels of protection, while bystanders are required to wear plastic goggles or simply look away.

Long-term exposure to infrared radiation can produce cellular degeneration and premature skin aging.

I have an article on low emf portable infrared saunas which you may like to check out.

8. Ultraviolet Rays

We receive a generous amount of ultraviolet (UV) light from the sun on a daily basis. It has a shorter wavelength than visible light and readily penetrates the outer layers of skin and soft tissue.

If you expose yourself to strong ultraviolet radiation you risk sunburns, eye cataracts, cellular damage and even skin cancer.

We are protected against the sun’s UV rays by the ozone layer, but during summer heat waves we are exposed to intense radiation.

Ultraviolet radiation has been the subject of intense clinical research. Several studies have concluded that prolonged exposure to UV rays can increase the risk of developing skin melanoma and premature aging.

The first signs of too much UV radiation are wrinkles, dry skin, spots, moles, and freckles. The damage produced by ultraviolet rays builds up during an entire lifetime, and early symptoms of damaged epidermis may only be visible at a later date.

Mild UV is also generated by security-marking devices and the lamps used to detect forged bank notes. Additionally, some telescopes observe faraway stars and galaxies in ultraviolet light.

9. X-Ray Waves

In the electromagnetic spectrum, UV rays are followed by shorter-wavelength radiation such as X-rays, a form of ionizing radiation. This type of wave is dangerous to human health, especially with excessive exposure.

X-rays can easily penetrate soft tissue in the human body, which is why they are used in medical procedures to image bones and examine their condition.

During this process, the level of radiation is kept at a minimum level to prevent cell degeneration and possible mutations. Clinical studies have revealed that prolonged exposure to X-ray radiation translates into a high risk of developing cancer.

Even mild exposure to X-rays may produce harmful effects on your health. For example, if you undergo X-ray examinations on a regular basis, you risk suffering from strong headaches, joint aches, skin damage and reduced sperm motility.

In addition to medical purposes, X-rays are used to examine the internal structure of objects that cannot be inspected with the naked eye. Airport security scanners also use them to check luggage and ensure that safety regulations are respected by passengers.

10. Gamma Rays

Gamma rays have an even shorter wavelength than X-rays and sit at the far end of the electromagnetic spectrum.

They are strongly ionizing and can penetrate almost any material. Along their path they create charged radicals, which some consider to be the cause of DNA mutations during cancer treatments that involve their use.

Their cell-penetrating power is the reason why gamma rays are sometimes used to kill cancer cells. Some clinical tests have suggested that, under certain conditions, gamma radiation can also stimulate repair of genetic material.

Studies suggest that this effect is more likely with long-term exposure to small doses of gamma rays than with one-time treatments at high doses.

Additionally, gamma rays are used to sterilize foods and medical equipment. Scientists have observed the formation of gamma rays during powerful nuclear explosions and, cosmically, in supernova explosions.


Dark Matter Still Missing After Many Decades

The cover story of the November 16-22 New Scientist announced prominently on the cover:

“DARK MATTER: We still haven’t found it. Our theories are falling apart. Is it time to rethink the universe?” [1]

Dan Hooper, author of the cover story, is worried, because Dark Matter theory is a necessary support for the Big Bang. Thus, the Big Bang theory is also in trouble, as is the current theory about

how stars move within galaxies, and how galaxies move within galaxy clusters. Without it, we can’t explain how such large collections of matter came to exist, and certainly not how they hang together today. But what it is, we don’t know. Welcome to one of the biggest mysteries in the universe: what makes up most of it. Our best measurements indicate that some 85 per cent of all matter in our universe consists of “dark matter” made of something that isn’t atoms.[2]

This means that the visible universe, with all its complexity and diversity, amounts to only a fraction of what secular astronomers believe exists. The rest is invisible, because it does not give off radiation that is part of the electromagnetic spectrum. Scientists have been trying to find dark matter since it was proposed 80 years ago. Scores of articles now echo the New Scientist cover story. An example is the headline that announced “New data tracking the movements of millions of Milky Way stars have effectively ruled out the presence of a ‘dark disk’ that could have offered important clues to the mystery of dark matter.”[3]

From axions to WIMPs (weakly interacting massive particles), “many candidates have been proposed as dark matter’s identity — and sought to no avail in dozens of experiments.”[4] Even as far back as 1992, Davis indicated the Dark Matter theory is in trouble.[5] One study that reanalyzed the data obtained as early as 1983 by the infrared astronomical satellite was felt by some astronomers to have delivered “a fatal blow” to the theory of cold dark matter.[6]

The newest “fatal blow” was the end result of the experiments from the XENON1T detector under the Gran Sasso mountain in Italy, the LUX detector in South Dakota and the Sichuan, China PandaX-II detector, each of which was “roughly 10,000 times as sensitive as the most sophisticated dark matter detectors operating in 2006.” None of them found any evidence of dark matter.[7] This multimillion dollar investment is what the New Scientist article referenced above referred to: Is it time to rethink the universe?

Dark Matter Critical to the Big Bang

Dark matter is a critical pillar of the Big Bang theory.[8] Thus, “dark matter’s no-show could mean a big bang rethink.”[9] Fritz Zwicky was the first to postulate dark matter in the 1930s from his observations of the large motions of galaxies in clusters. Observations made with both radio and optical telescopes to assemble a reasonable understanding of the universe have suggested that there must exist large amounts of matter in the universe that do not give off electromagnetic radiation, including radio and light waves; therefore, this matter cannot be detected directly from Earth by existing methods. The theory has historically concluded that between 85 and 99 percent of the matter in the universe consists of dark matter and dark energy. This range of estimates indicates our lack of ability to determine the amount of “missing matter.”

For matter in outer space to be observed, it must emit radiation that can be picked up by optical or radio telescopes. Matter that does not produce light or heat, i.e., is not energized by the means normally used to produce electromagnetic radiation, is detectable only by gravity-force calculations. Either dark matter does not produce any electromagnetic radiation (unlike all known matter, which does), or it does not exist. One goal of the Hubble Space Telescope was to search for clues related to the possible existence of this dark matter.

Graphic by David Coppedge

In the 1960s, Vera Rubin and others, measuring rotation rates of spiral galaxies, found that these systems of billions of stars were rotating at such a high rate that the outer regions should have spun off into intergalactic space eons ago. The conclusion from this research was that since there was not enough matter in the galaxy to hold it together by gravity, the evidence supported the Dark Matter theory. They envisioned large haloes of invisible matter surrounding the Milky Way and other galaxies to account for the observed rotation curves.

Dwarf galaxies are likewise estimated to possess large amounts of non-radiating mass in order to prevent being pulled apart by the force of their neighbors’ gravity. Additionally, some galaxies spin and orbit one another at speeds faster than the laws of physics, as currently understood, allow. Consequently, they thought, large amounts of invisible matter must exist to provide the extra gravity necessary to allow the observations, and theory, to work.

One theory hypothesizes that dark matter would create high density pockets and provide the seeds to begin the process of pulling this matter into clumps that later formed ordinary matter. This argument implies that dark matter is the same type of mass that makes up visible matter, but simply lacks energy.[10] Theoretically, dark matter cannot interact with regular matter except gravitationally. Yet we have no direct evidence of such matter, and its existence has been largely speculative since its inception.[11] Let’s look at some of the proposed candidate particles that might make up dark matter.

A common candidate is the neutrino, specifically mu and tau neutrinos. These are known particles that lack electric charge, but some research indicates they have a small amount of mass.[12] If the mass of heavier neutrinos is large, as is now hypothesized, “there is enough missing mass to satisfy everybody.”[13] Although much or even most of the matter of the universe may consist of neutrinos, they are difficult to detect because they interact poorly with other kinds of matter. Allegedly, these cosmic “greased pigs” mostly travel through the Earth without interacting with the matter in it. Astrophysicists don’t understand, however, how they could accumulate in pockets of the universe around the galaxies to produce the gravity level required to hold galaxies together. Also, if they traveled at the speed of light, as was once believed, they would have to be massless. For these reasons, many cosmologists dismiss neutrinos as dark matter candidates.

Another theory proposes hypothetical particles, such as axions or photinos, that would serve as sources of the missing mass. Neither of these proposals currently enjoys empirical support.[14] Faber hypothesizes that dark matter consists of “two new particles, one massive, one light,” and a computer analysis of their interactions indicates that they can produce “clumping on a grand scale.” She has also concluded that all other dark matter candidates, including neutrinos, are inadequate and would not produce the required clumping for the planets and stars to form.[15]

Yet another theory devised to explain the nature of missing mass proposes large amounts of ionized hydrogen widely distributed in the universe. Ultraviolet radiation from stars could provide the energy required to cause hydrogen ionization, but stars do not emit a sufficient level of energy to produce the ionized hydrogen level necessary to account for all of the missing mass. Another source of energy, the Lyman-alpha radiation emitted by galactic pulsars, is also insufficient to fully explain the phenomena observed.[16]

Others have hypothesized that dark matter spontaneously decays, emitting ultraviolet photons. More recent research, though, has questioned a number of the basic assumptions involved in this theorizing. Efforts to detect dark matter by evaluating its gravitational pull are now under way. Some argue that the structures discovered in the past decade are so massive that even the cold dark matter theory cannot adequately account for their formation.

The Gamma Ray Observatory and the Advanced X-ray Astrophysics satellites, along with the Space Infrared Telescope Facility launched about a year later, were designed to study different portions of the electromagnetic spectrum. Gamma rays are of special interest because these photons possess energy levels millions or billions of times greater than that of visible light photons, which have energies of only a few electron volts. The massive gamma-ray bursts detected by satellites were similar to those released during the explosion of atomic bombs, but they did not correspond to any then-known bomb pattern. These brief bursters (a phrase that physically describes their action, and is now used as a noun to label them) last from a fraction of a second to about one hundred seconds. Research on gamma rays may also tell us much about neutron stars, black holes, and supernovas, as well as shed light on contemporary theories of the universe’s origin.
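To make the energy comparison concrete, the short sketch below evaluates the photon energy E = hc/λ for a visible-light wavelength and for a gamma-ray wavelength; the specific wavelengths are my own illustrative choices, not values from the text.

```python
# Illustrative arithmetic: photon energy E = h*c/wavelength, in electron volts.
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron volt

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy in eV for a given wavelength in metres."""
    return H * C / wavelength_m / EV

print(f"visible light (550 nm): {photon_energy_ev(550e-9):10.2f} eV")  # a few eV
print(f"gamma ray (1 pm)      : {photon_energy_ev(1e-12):10.2e} eV")   # roughly a million times more
```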

Energetic sources emit gamma rays

Since their discovery in 1957, cosmic gamma rays have been difficult to study because those that reach the Earth tend to be weak and their numbers irregular. This is both because of the great distance between Earth and their sources, and because the atmosphere effectively blocks most gamma rays. A further advantage of gamma rays is that they are uncharged; consequently, they are not deflected by the charged particles in space.

The major impediment to studying gamma rays is that our atmosphere shields the Earth from most gamma ray radiation. This filter, although extremely fortunate for life on Earth because gamma radiation is carcinogenic, precludes the study of gamma rays from the land surface. A gamma ray observatory satellite is for this reason a powerful tool to probe gamma radiation in outer space.[17]

The Promise of WIMPs Fails

The most viable theory for years was that dark matter consists of weakly interacting massive particles (WIMPs). These hypothetical particles were supposed to be left over from the early Big Bang. Hypothesized as centers of gravity, they would be able to accumulate matter. Big-bang theorists predicted that WIMPs would affect the universe in such a way as to cause fewer, larger elliptical galaxies and galactic clusters to exist in the near future than exist today. Once again, though, the theoretical WIMPs have never been directly observed. Hooper wrote what may serve as their obituary:

A decade or more ago, many physicists, including me, thought we knew what dark matter was likely to consist of: weakly interacting massive particles, or WIMPs … the longer we go without directly detecting WIMPs, the more we are forced to confront the uncomfortable possibility that they might not be there. And yet dark matter must exist – alternative explanations, such as modifying gravity to produce the same sort of effects, don’t seem to work … If not WIMPs, then what?[18]

In short, research thus far has not produced the evidence required to support the Dark Matter theory, and with it, the Big Bang. Saunders et al. even conclude that Dark Matter theory can now be ruled out at the 97 percent confidence level.[19] So “If not WIMPs, then what?”[20] Scientists have no answer still, but cannot give up. It’s essential to support their materialistic worldview. And so they will continue, with costly instruments paid for by taxpayers, to stare at nothing.

[3] Wolchover, Natalie. 2017. “Deathblow Dealt to Dark Matter Disks.” Quanta Magazine, November 17. https://www.quantamagazine.org/deathblow-dealt-to-dark-matter-disks-20171117/ emphasis added.

[5] Davis, M., et al. 1992. “The end of cold dark matter?” Nature, 356:489-494, April 9.

[6] Lemonick, Michael. 1993. The Light at the End of the Universe. New York, NY: Villard Books.


[8] The cold-dark-matter theory has been proposed to help explain some of the Big Bang cosmology difficulties and certain other cosmological incongruities.

[11] Davis, M., et al. 1992. “The end of cold dark matter?” Nature, 356:489-494, April 9.

[12] Gribbin, John. 1991. “Recreating the Birth of the Universe.” New Scientist, 131(1782):31-34, August 17.

[14] Maddox, John. 1990. “Making the Universe Hang Together.” Nature, 348:579, December 13.

[15] Faber, Sandra. 1990. Interview. OMNI, 23:62-64, 88-92, July, p. 64.

[16] Webb, John K. 1989. “A walk in the Lyman-alpha forest.” Nature, 338(6217):620-622, April 20.

[17] Begley, Sharon, John McCormick, and Daniel Glick. 1990. “The heavens are holey.” Newsweek, pp. 60-61, April 30.

[19] Saunders, Will, et al. 1991. “The density field of the local Universe.” Nature, 349:32-38, January 3; Lindley, David. 1991. “Cold dark matter makes an exit.” Nature, 349(6304):14, January 3; Davis, M., et al. 1992. “The end of cold dark matter?” Nature, 356:489-494, April 9.

Dr. Jerry Bergman has taught biology, genetics, chemistry, biochemistry, anthropology, geology, and microbiology at several colleges and universities including for over 40 years at Bowling Green State University, Medical College of Ohio where he was a research associate in experimental pathology, and The University of Toledo. He is a graduate of the Medical College of Ohio, Wayne State University in Detroit, the University of Toledo, and Bowling Green State University. He has over 1,300 publications in 12 languages and 40 books and monographs. His books, and the textbooks that include chapters he authored, are in over 1,500 college libraries in 27 countries. So far over 80,000 copies of the 40 books and monographs that he has authored or co-authored are in print. For more articles by Dr Bergman, see his Author Profile.


Dark Energy in the Early Universe

Title: New Limits on Early Dark Energy From the South Pole Telescope
Authors: Christian L. Reichardt, Roland de Putter, Oliver Zahn, Zhen Hou
First Author’s Institution: University of California, Berkeley

The expansion of the universe is accelerating. The discovery of this fact was revolutionary in astronomy, and, it turns out, is the type of discovery that will win you the Nobel Prize in Physics. Prior to this result, most people believed the expansion of the universe must be slowing down, since gravity tries to pull matter back together. An accelerating universe means that there must be something pushing on space itself, causing it to expand faster and faster. The exact mechanism for this process remains unknown, and we parameterize our ignorance with the term dark energy.

The current favored cosmological model, known as the ΛCDM model, tells us that roughly 73% of the universe today is composed of this mysterious dark energy (check out this astrobite for a more detailed discussion of this cosmological model). The simplest form of dark energy is known as the cosmological constant and involves an energy whose density is constant in time. In this case, as one looks back in time, the relative importance of dark energy quickly becomes much smaller than that of matter and radiation; this arises from the fact that the density of matter scales as a⁻³ and the density of radiation scales as a⁻⁴, where a is known as the scale factor and parameterizes the size of the universe. Thus, at early times in the universe's history, when the universe was much smaller, its density was dominated first by radiation and then by matter. Dark energy's influence only becomes apparent at much later times, such as the present.
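A minimal sketch of this scaling argument is given below. The density values used (roughly 73% dark energy, 27% matter, and about 8 × 10⁻⁵ radiation, in units of today's total density) are typical round numbers for illustration, not figures taken from the paper.

```python
# Toy illustration of why a cosmological constant is negligible at early times:
# matter density scales as a^-3, radiation as a^-4, and a cosmological constant
# stays fixed, where a is the scale factor (a = 1 today).
OMEGA_LAMBDA = 0.73     # dark energy (constant)
OMEGA_MATTER = 0.27     # matter
OMEGA_RADIATION = 8e-5  # radiation (photons + neutrinos), rough value

def relative_densities(a: float):
    """Return (dark energy, matter, radiation) densities at scale factor a."""
    return OMEGA_LAMBDA, OMEGA_MATTER / a**3, OMEGA_RADIATION / a**4

for a in (1.0, 0.5, 0.1, 1e-3):
    de, m, r = relative_densities(a)
    print(f"a = {a:7.3f}: dark energy {de:.2e}, matter {m:.2e}, radiation {r:.2e}")
```

At a = 10⁻³ (roughly the epoch of the CMB), the constant term is many orders of magnitude below the matter and radiation terms, which is why a detectable early dark energy component requires a model that departs from the simple cosmological constant.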

However, there are alternatives to the cosmological constant model in which the influence of dark energy in the early universe was not negligible in comparison to that of radiation and matter. These types of theories are known as early dark energy (EDE) models because they predict a strong dark energy effect at early times. The authors of this paper consider such a model, and constrain the density of EDE using the latest measurements of the temperature fluctuations of the cosmic microwave background (CMB) from the South Pole Telescope (SPT).

How can high-resolution measurements of the CMB constrain the existence of dark energy in the early universe? The amplitude of the CMB's temperature fluctuations as a function of angular scale is sensitive to several important parameters, including the densities of matter, radiation, and dark energy (just recently, measurements of the CMB provided evidence for the existence of dark energy independent of any other measure of the universe's expansion rate). The existence of EDE would imprint a strong signature on the CMB, and we can search for this signature in the CMB power spectrum. The power spectrum characterizes the size of temperature fluctuations as a function of multipole ℓ, where large (small) ℓ corresponds to small (large) scales on the sky. The multipole number is similar to the frequency of a wave, in that a larger multipole number corresponds to fluctuations of a smaller physical scale. For a great introduction to the CMB power spectrum, check out this tutorial by Wayne Hu. The CMB's power spectrum as measured by SPT is shown in the figure below.
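As a rough aid to intuition (my own rule of thumb, not a statement from the paper), a multipole ℓ corresponds to an angular scale of roughly 180°/ℓ on the sky:

```python
# Rough rule of thumb: angular scale in degrees ~ 180 / multipole.
for ell in (10, 100, 1000, 3000):
    theta_deg = 180.0 / ell
    print(f"l = {ell:5d}  ->  ~{theta_deg:6.3f} degrees  (~{theta_deg * 60:6.1f} arcminutes)")
```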

The addition of EDE increases the expansion rate in the early universe, which suppresses the growth of matter perturbations. This suppression in turn drives an increase in the amplitude of the temperature anisotropies, most strongly on small scales. Thus, the addition of EDE enhances the peaks in the CMB power spectrum at high ℓ. Until recently, the CMB power spectrum had been measured with small errors only at low ℓ by the WMAP satellite (shown on the plot by the open diamonds), but as the effects of EDE are strongest at high ℓ, the WMAP measurements are not sufficient for strong constraints on its existence. However, SPT has a higher spatial resolution than WMAP and is able to measure the small-scale CMB temperature anisotropies with much greater precision. The figure above shows the WMAP and SPT measurements of the CMB power spectrum as well as six different best-fit models with varying EDE density, ranging from 0% (black) to 5% (red). Each of the EDE models is consistent with the WMAP data at large scales, but they differ significantly from each other and from the data at the smaller scales to which SPT is sensitive.

Using the combination of WMAP and SPT measurements, the authors place a strong upper limit on the density of early dark energy: at a confidence level of 95%, dark energy in the early universe did not account for more than 1.8% of the total density of the universe. This is roughly a factor of 3 improvement over the upper limit derived solely from the WMAP data. The probability distribution for the EDE density derived from the data is shown in the figure to the left. The authors also point out that in the next year, order of magnitude improvements in the measurements of the small-scale CMB power spectrum are expected from surveys like SPT, the Atacama Cosmology Telescope, and the Planck satellite. These improvements promise to further our understanding of the nature of dark energy and its importance in the early universe.


Elementary Astronomy (107)




Nobody knows what dark matter and dark energy are, or even if they are related. Dark matter is invisible and reacts with nothing except by pulling with its gravity. It holds the Universe together. Dark energy also is unseen and reacts with nothing except it pushes and is said to be causing the expansion of the Universe to speed up, pulling the Universe apart.

Recall Special and General Relativity

Einstein's equations did produce some effects which even he found unbelievable. They said the Universe would either expand indefinitely into the future, or fall back on itself into a giant black hole like a reverse Big Bang. He thought that the Universe should be stable and always the same size, since at the time Hubble had not yet found out about redshifts.

There is no way to have a stable universe unless something pushes out against gravity, so Einstein added such a term to his solution describing the universe. The strength of the push is determined by the cosmological constant Λ, and he adjusted it to exactly counteract gravity and make the universe stable. After the expansion of the universe was discovered, Einstein called adding the constant his "biggest blunder", and he took it out. Without it, the Universe's fate was determined only by gravity. The expansion should slow down, but it might not stop altogether. For the next 70 years, astronomers thought that was the right answer.

Type Ia Supernovae in Distant galaxies
Dark Matter versus Dark Energy in the Expansion of the Universe

The Big Rip!

And the universe will end in a Big Rip. This means that when dark energy pressure accelerates the expansion to nearly the speed of light, different parts will no longer be able to see each other or feel any attractive forces. No interaction can occur, neither gravitational nor electromagnetic nor weak nor strong, and all matter will be ripped apart.


First, the galaxies would be separated from each other. Arguably, this is what is happening right now, with galaxies that move outside the observable universe (approximately 46.5 billion light years away). About 60 million years before the end, gravity would be too weak to hold the Milky Way and other individual galaxies together. Approximately three months before the end, the solar system will be gravitationally unbound. In the last minutes, stars and planets will be torn apart, and an instant before the end, atoms will be destroyed.

The authors of this hypothesis, led by Robert Caldwell of Dartmouth College, calculate that the end of the universe as we now know it would be in approximately 50 billion years. (Credit: Wikipedia on the "Big Rip".)

A galaxy in the Big Rip

How Do We See "Dark" Matter?


Just as we were able to calculate the mass of a planet from the orbital revolution of its satellites pulled by its gravity, we can calculate the mass within a star's orbit around a galaxy. Wherever we look, however, the motion is faster than we expect for the matter we see in the galaxy, and the more so the farther we are from the center of the galaxy. This means that in addition to the central black hole, stars, planets, gas, and dust within the orbit, there has to be something more. Vera Rubin, an astronomer who championed the role of women in science, was herself considered a potential candidate for the Nobel Prize for this discovery. The Large Synoptic Survey Telescope, soon to be operating in Chile, has been named in her honor.
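A minimal sketch of this calculation, assuming a simple circular orbit so that the enclosed mass follows from M = v²r/G, is shown below; the orbital speed and radius are illustrative numbers of my own choosing, not measurements quoted in the notes.

```python
# Enclosed mass from a circular orbit: v^2 / r = G M / r^2  =>  M = v^2 r / G.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # one kiloparsec, m

def enclosed_mass_msun(v_km_s: float, r_kpc: float) -> float:
    """Mass (in solar masses) needed to keep a star on a circular orbit."""
    v = v_km_s * 1e3
    r = r_kpc * KPC
    return v**2 * r / G / M_SUN

# Example: a star orbiting at 220 km/s at 8 kpc from a galaxy's centre
print(f"enclosed mass ~ {enclosed_mass_msun(220.0, 8.0):.2e} solar masses")
```

If the measured speed stays flat or even rises at larger radii, the enclosed mass keeps growing with distance even though the visible light falls off, which is the rotation-curve argument for unseen mass.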

Stars orbiting at the outer edges of the galaxy M33 move faster than expected for a galaxy of this apparent mass.

The mass needed to create a gravitational lens with a cluster of galaxies is 10 times that of the observable matter seen in the galaxies of the cluster. This huge gravity bends light from more remote galaxies, just as starlight is bent around the Sun. The invisible mass accounting for this gravity is "dark" matter associated with the cluster of galaxies, but it may have a distribution different from that of the galaxies in the cluster.

Gravitational lens caused by gravity from invisible dark matter in this cluster of galaxies.

Dark matter, in the company of galaxies, is distributed throughout the Universe in a filamentary pattern that looks like Swiss cheese when it is sliced. But what is it?

Dark matter is distributed through the universe.

What Could the Dark Matter Be ?

Nobody knows what dark matter is. Is it some new particle no one has ever seen? We know it cannot be dust, planets, faint stars, or even small black holes. The best candidate is called a WIMP for "weakly interacting massive particle." If they exist, we may find evidence for them in experiments with particle colliders here on Earth, and by looking for evidence of their decay and interaction in cosmic rays.

Could the evidence for dark matter be accounted for by a modification of Einstein's theory of gravitation as curved space? Does it have something to do with gravitons, the hypothetical particles that are agents of gravitation in a unified field theory? Is it explained by String Theory ?

String Theory predicts the existence of gravitons and the interactions of fundamental particles, but it is a very complex theory adding extra dimensions to the familiar three of space and one of time. A graviton in perturbative string theory is a closed string in a very particular low-energy vibrational state. We look to string theory as a theory of everything that unifies our description of the universe.

There are four known "forces" in nature. Newton created the concept of force to explain why matter would change the direction and speed of its motion. He explained gravity as the force that caused planets to move in curved paths, and he formulated a mathematical law that accounted for the observed motions. In a similar way, the others -- electricity and magnetism, the weak force, and the strong force -- provide ways to calculate how a particle moves through spacetime.

Einstein changed all this. By explaining gravity as a distortion of space and time, he made spacetime itself part of the "theory of everything".

All the other forces have known force-carrier particles associated with them. Massless force carriers travel at the speed of light. Electromagnetic forces have massless photons. The strong force holding quarks and atomic nuclei together has gluons. The weak force responsible for free neutrons decaying into protons and electrons has the W and Z particles, which, unlike the photon and gluons, are massive. We know the photon well because it is the quantum of ordinary light. Gluons and the W and Z are studied at particle accelerators, where they appear in collisions of energetic particles. But nobody has seen the elusive graviton that should be the force carrier for gravity and would appear in the extreme conditions when quantum gravity effects become significant.

We expect these effects to occur when the universe is not described by General Relativity, in the smallest natural element of time. How small is that? This fundamental unit of time is found by combining G (the constant of gravitation from Newton's law of gravity), h (Planck's constant from quantum mechanics), and c (the speed of light) to make the Planck time,

t_Planck = √(ħG/c⁵) ≈ 5 × 10⁻⁴⁴ seconds (using the reduced constant ħ = h/2π).

For this incredibly small time (the very, very, very early universe), the corresponding distance light could travel is the Planck length, c · t_Planck ≈ 1.6 × 10⁻³⁵ meters.
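The small sketch below simply re-derives those numbers from the standard constants; it is my own check, not part of the original notes.

```python
# Planck time and Planck length from G, hbar and c.
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34    # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s

t_planck = math.sqrt(HBAR * G / C**5)  # smallest natural unit of time
l_planck = C * t_planck                # distance light travels in that time

print(f"Planck time   ~ {t_planck:.2e} s")   # ~5.4e-44 s
print(f"Planck length ~ {l_planck:.2e} m")   # ~1.6e-35 m
```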



Maybe someday the theoretical graviton will hop over to the side with its known force-carrier friends.

DARK ENERGY


Dark energy permeates all of space and increases the rate of expansion of the universe. In the standard model of cosmology, dark energy accounts for about 73% of the total mass-energy of the universe. Dark energy is a sort of anti-gravity. Although we have a mathematical description of its action on the universe, we do not know exactly what it is. One way we may learn more is by studying the warping of spacetime near black holes.

The spectrum of the cosmic background radiation we observe is an exact match to the spectrum we expect from the gas in the early universe, redshifted because we are seeing it at such great distance. There is a slight pattern in this light, a structure that has the correct scale to have been the precursor of the structure we now see at much larger scale in the distribution of galaxies and dark matter.


The picture makes the differences seem very large, but they are actually very slight differences in temperature or redshift due to the variations in the density of the gas. These fluctuations in density were embedded in the mass-energy of the universe at the moment it began, yet they correlate farther across space than light could travel in the time since the beginning. How can that be?

The answer is that the Universe "inflated" faster than the speed of light immediately after the Big Bang. The stretching of space faster than light speed is actually allowed by relativity. From the distant cosmic background radiation we know how big and how long this inflation was. From observations of supernovae that are closer to us, we know that it slowed to a steady expansion very early, that gravity has acted to slow it down further, but that now it is speeding up again.

The agent of inflation is termed Dark Energy. The Hubble Constant (H), the strength of gravity (G) and the observations of inflation in the cosmic background and through supernovae tell us that Dark Energy makes up a surprising 73% or so of the mass-energy of the entire universe.


Dead star emits never-before seen mix of radiation

This artist's impression provides a schematic of how the imager on-board ESA's Integral satellite (IBIS) can reconstruct images of powerful events like gamma-ray bursts (GRB) using the radiation that passes through the side of Integral’s imaging telescope. IBIS uses two detector layers, one on top of the other, while most gamma-ray telescopes contain just a single detector layer. In IBIS, the higher energy gamma rays trigger the first detector layer (called ISGRI), losing some energy in the process, but they are not completely absorbed. This is known as Compton scattering. The deflected gamma rays then pass through to the layer below (called PICSIT) where they can be captured and absorbed by the PICSIT crystals because they have given up some energy in their passage through the first layer. The blue-shaded part of the image describes the fully coded field of view of the instrument. IBIS can see around corners because gamma rays from the most powerful GRBs would pass through the lead shielding on the side of the telescope, then through the first detector layer before coming to rest in the second layer. The scatter locations in the two detector layers and the energy deposits can then be used to determine the direction of the GRB. Credit: ESA/C.Carreau

A global collaboration of telescopes including ESA's Integral high-energy space observatory has detected a unique mix of radiation bursting from a dead star in our galaxy—something that has never been seen before in this type of star, and may solve a long-standing cosmic mystery.

The finding involves two kinds of interesting cosmic phenomena: magnetars and Fast Radio Bursts. Magnetars are stellar remnants with some of the most intense magnetic fields in the Universe. When they become 'active', they can produce short bursts of high-energy radiation that typically last for not even a second but are billions of times more luminous than the Sun.

Fast Radio Bursts are one of astronomy's major unsolved mysteries. First discovered in 2007, these events pulse brightly in radio waves for just a few milliseconds before fading away, and are only rarely seen again. Their true nature remains unknown, and no such burst has ever been observed either within the Milky Way, with a known origin, or emitting any other kind of radiation beyond the radio wave domain—until now.

In late April, SGR 1935+2154, a magnetar that was discovered six years ago in the constellation of Vulpecula following a substantial burst of X-rays, became active again. Soon after, astronomers spied something astonishing: this magnetar was not only radiating its usual X-rays, but radio waves, too.

"We detected the magnetar's burst of high-energy, or 'hard', X-rays using Integral on 28 April," says Sandro Mereghetti of the National Institute for Astrophysics (INAF–IASF) in Milan, Italy, lead author of a new study of this source based on the Integral data.

"The 'Burst Alert System' on Integral automatically alerted observatories worldwide about the discovery in just seconds. This was hours before any other alerts were issued, enabling the scientific community to act fast and explore this source in more detail."

Astronomers on the ground spotted a short and extremely bright burst of radio waves from the direction of SGR 1935+2154 using the CHIME radio telescope in Canada on the same day, over the same timeframe as the X-ray emission. This was independently confirmed a few hours later by the Survey for Transient Astronomical Radio Emission 2 (STARE2) in the US.

"We've never seen a burst of radio waves, resembling a Fast Radio Burst, from a magnetar before," adds Sandro.

"Crucially, the IBIS imager on Integral allowed us to precisely pinpoint the origin of the burst, nailing its association with the magnetar," says co-author Volodymyr Savchenko from the Integral Science Data Centre at the University of Geneva, Switzerland.

Artist's impression of SGR 1935+2154, a highly magnetised stellar remnant, also known as a magnetar. Credit: ESA

"Most of the other satellites involved in the collaborative study of this event weren't able to measure its position in the sky—and this was crucial in identifying that the emission did indeed come from SGR1935+2154."

"This is the first ever observational connection between magnetars and Fast Radio Bursts," explains Sandro.

"It truly is a major discovery, and helps to bring the origin of these mysterious phenomena into focus."

This connection strongly supports the idea that Fast Radio Bursts emanate from magnetars, and demonstrates that bursts from these highly magnetized objects can also be spotted at radio wavelengths. Magnetars are increasingly popular with astronomers, as they are thought to play a key role in driving a number of different transient events in the Universe, from super-luminous supernova explosions to distant and energetic gamma-ray bursts.

Launched in 2002, Integral carries a suite of four instruments able to simultaneously observe and take images of cosmic objects in gamma rays, X-rays, and visible light.

At the time of the burst, the magnetar happened to be in the 30 degree by 30 degree field of view of the IBIS instrument, leading to an automatic detection by the satellite's Burst Alert System software package—operated by the Integral Science Data Centre in Geneva—immediately alerting observatories worldwide. At the same time, the Spectrometer on Integral (SPI) also detected the burst of X-rays, along with another space mission, China's Insight Hard X-ray Modulation Telescope (HXMT).

"This kind of collaborative, multi-wavelength approach and resulting discovery highlights the importance of timely, large-scale coordination of scientific research efforts," adds ESA's Integral project scientist Erik Kuulkers.

"By bringing together observations from the high-energy part of the spectrum all the way to radio waves, from across the globe and in space, scientists have been able to elucidate a long-standing mystery in astronomy. We're thrilled that Integral played a key role in this."

The paper "INTEGRAL discovery of a burst with associated radio emission from the magnetar SGR 1935+2154" by S. Mereghetti et al. is published in the Astrophysical Journal Letters.


Dark matter emitting EM radiations beyond our currently known spectrum? - Astronomy

The spectra of real galaxies depend strongly on wavelength and also evolve with time. How might these facts alter the conclusion obtained in Sec. 2, namely, that the brightness of the night sky is overwhelmingly determined by the age of the Universe, with expansion playing only a minor role?

The significance of this question is best appreciated in the microwave portion of the electromagnetic spectrum (at wavelengths from about 1 mm to 10 cm) where we know from decades of radio astronomy that the "night sky" is brighter than its optical counterpart (Fig. 1). The majority of this microwave background radiation is thought to come, not from the redshifted light of distant galaxies, but from the fading glow of the big bang itself -- the "ashes and smoke" of creation in Lemaître's words. Since its nature and suspected origin are different from those of the EBL, this part of the spectrum has its own name, the cosmic microwave background (CMB). Here expansion is of paramount importance, since the source radiation in this case was emitted at more or less a single instant in cosmological history (so that the "lifetime of the sources" is negligible). Another way to see this is to take expansion out of the picture, as we did in Sec. 2.4: the CMB intensity we would observe in this "equivalent static model" would be that of the primordial fireball and would roast us alive.

While Olbers' paradox involves the EBL, not the CMB, this example is still instructive because it prompts us to consider whether similar (though less pronounced) effects could have been operative in the EBL as well. If, for instance, galaxies emitted most of their light in a relatively brief burst of star formation at very early times, this would be a galactic approximation to the picture just described, and could conceivably boost the importance of expansion relative to lifetime, at least in some wavebands. To check on this, we need a way to calculate EBL intensity as a function of wavelength. This is motivated by other considerations as well. Olbers' paradox has historically been concerned primarily with the optical waveband (from approximately 4000Å to 8000Å), and this is still what most people mean when they refer to the "brightness of the night sky." And from a practical standpoint, we would like to compare our theoretical predictions with observational data, and these are necessarily taken using detectors which are optimized for finite portions of the electromagnetic spectrum.

We therefore adapt the bolometric formalism of Sec. 2. Instead of total luminosity L, consider the energy emitted by a source per unit time between wavelengths λ and λ + dλ. Let us write this in the form dL_λ = F(λ, t) dλ, where F(λ, t) is the spectral energy distribution (SED), with dimensions of energy per unit time per unit wavelength. Luminosity is recovered by integrating the SED over all wavelengths:

L(t) = ∫₀^∞ F(λ, t) dλ .

We then return to (11), the bolometric intensity of the spherical shell of galaxies depicted in Fig. 2. Replacing L(t) with dL_λ in this equation gives the intensity of light emitted between λ and λ + dλ:

This light reaches us at the redshifted wavelength λ₀ = λ / R̃(t), where R̃(t) is the scale factor normalized to its present value. Redshift also stretches the wavelength interval by the same factor, dλ₀ = dλ / R̃(t). So the intensity of light observed by us between λ₀ and λ₀ + dλ₀ is

The intensity of the shell per unit wavelength, as observed at wavelength λ₀, is then given simply by

where the factor 4π converts from an all-sky intensity to one measured per steradian. (This is merely a convention, but has become standard.) Integrating over all the spherical shells corresponding to times between t₀ − t_f and t₀ (as before), we obtain the spectral analog of our earlier bolometric result, Eq. (12):

This is the integrated light from many galaxies, which has been emitted at various wavelengths and redshifted by various amounts, but which is all in the waveband centered on λ₀ when it arrives at us. We refer to this as the spectral intensity of the EBL at λ₀. Eq. (61), or ones like it, have been considered from the theoretical side principally by McVittie and Wyatt [12], Whitrow and Yallop [13, 14] and Wesson [10, 15].

Eq. (61) can be converted from an integral over t to one over z by means of Eq. (14) as before. This gives

Eq. (62) is the spectral analog of (15). It may be checked using (57) that bolometric intensity is just the integral of spectral intensity over all observed wavelengths, Q = ∫₀^∞ I_λ(λ₀) dλ₀. Eqs. (61) and (62) provide us with the means to constrain any kind of radiation source by means of its contributions to the background light, once its number density n(z) and energy spectrum F(λ, z) are known. In subsequent sections we will apply them to various species of dark (or not so dark) energy and matter.

In this section, we return to the question of lifetime and the EBL. The static analog of Eq. (61) (i.e. the equivalent spectral EBL intensity in a universe without expansion, but with the properties of the galaxies unchanged) is obtained exactly as in the bolometric case by setting R̃(t) = 1 (Sec. 2.4):

Just as before, we may convert this to an integral over z if we choose. The latter parameter no longer represents physical redshift (since this has been eliminated by hypothesis), but is now merely an algebraic way of expressing the age of the galaxies. This is convenient because it puts (63) into a form which may be directly compared with its counterpart (62) in the expanding Universe:

If the same values are adopted for H0 and zf, and the same functional forms are used for n(z), F(λ, z) and H̃(z), then Eqs. (62) and (64) allow us to compare model universes which are alike in every way, except that one is expanding while the other stands still.

Some simplification of these expressions is obtained as before in situations where the comoving source number density can be taken as constant, n(z) = n0. However, it is not possible to go farther and pull all the dimensional content out of these integrals, as was done in the bolometric case, until a specific form is postulated for the SED F(λ, z).

The simplest possible source spectrum is one in which all the energy is emitted at a single peak wavelength λp at each redshift z, thus

SEDs of this form are well-suited to sources of electromagnetic radiation such as elementary particle decays, which are characterized by specific decay energies and may occur in the dark-matter halos surrounding galaxies. The δ-function SED is not a realistic approximation for the spectra of galaxies themselves, but we will apply it here in this context to lay the foundation for later sections.

The function Fp(z) is obtained in terms of the total source luminosity L(z) by normalizing over all wavelengths,

L(z) = ∫₀^∞ F(λ, z) dλ

so that Fp(z) = L(z)/λp. In the case of galaxies, a logical choice for the characteristic wavelength λp would be the peak wavelength of a blackbody of "typical" stellar temperature. Taking the Sun as typical (T = T☉ = 5770 K), this would be λp = (2.90 mm K)/T = 5020 Å from Wien's law. Distant galaxies are seen chiefly during periods of intense starburst activity when many stars are much hotter than the Sun, suggesting a shift toward shorter wavelengths. On the other hand, most of the short-wavelength light produced in large starbursting galaxies (as much as 99% in the most massive cases) is absorbed within these galaxies by dust and re-radiated in the infrared and microwave regions (λ ≳ 10,000 Å). It is also important to keep in mind that while distant starburst galaxies may be hotter and more luminous than local spirals and ellipticals, the latter contribute most to EBL intensity by virtue of their numbers at low redshift. The best that one can do with a single characteristic wavelength is to locate it somewhere within the B-band (3600-5500 Å). For the purposes of this exercise we associate λp with the nominal center of this band, λp = 4400 Å, corresponding to a blackbody temperature of 6590 K.
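As a quick check of these Wien's-law conversions, a short sketch using only the constant 2.90 mm K quoted above:

```python
WIEN_B = 2.90e7  # Wien's displacement constant in angstrom * kelvin (2.90 mm K)

def peak_wavelength(T):
    """Peak wavelength (angstroms) of a blackbody at temperature T (kelvin)."""
    return WIEN_B / T

def blackbody_temperature(lam_peak):
    """Blackbody temperature (kelvin) whose spectrum peaks at lam_peak (angstroms)."""
    return WIEN_B / lam_peak

print(peak_wavelength(5770.0))        # ~5020 angstroms for a Sun-like star
print(blackbody_temperature(4400.0))  # ~6590 K for a peak at the B-band centre
```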

Substituting the SED (65) into the spectral intensity integral (62) leads to

where we have introduced a new shorthand for the comoving luminosity density of galaxies:

ℒ(z) ≡ n(z) L(z)
At redshift z = 0 this takes the value ℒ0, as given by (20). Numerous studies have shown that the product of n(z) and L(z) is approximately conserved with redshift, even when the two quantities themselves appear to be evolving markedly. So it would be reasonable to take ℒ(z) = ℒ0 = const. However, recent analyses have been able to benefit from observational work at deeper redshifts, and a consensus is emerging that ℒ(z) does rise slowly but steadily with z, peaking in the range 2 ≲ z ≲ 3 and falling away sharply thereafter [16]. This is consistent with a picture in which the first generation of massive galaxy formation occurred near z ≈ 3, being followed at lower redshifts by galaxies whose evolution proceeded more passively.

Fig. 9 shows the value of ℒ0 from (20) at z = 0 [2] together with the extrapolation of ℒ(z) to five higher redshifts from an analysis of photometric galaxy redshifts in the Hubble Deep Field (HDF) [17]. We define a relative comoving luminosity density ℒ̃(z) by

ℒ̃(z) ≡ ℒ(z)/ℒ0

and fit this to the data with a cubic [log ℒ̃(z) = αz + βz^2 + γz^3]. The best least-squares fit is plotted as a solid line in Fig. 9 along with upper and lower limits (dashed lines). We refer to these cases in what follows as the "moderate," "strong" and "weak" galaxy evolution scenarios respectively.
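A minimal sketch of such a cubic fit, using a design matrix [z, z^2, z^3] so that the fitted curve is automatically anchored to ℒ̃(0) = 1; the coefficients a1, a2, a3 play the role of the cubic coefficients above, and the data points are hypothetical placeholders rather than the actual HDF values.

```python
import numpy as np

# Hypothetical placeholder points standing in for the HDF-based relative
# luminosity densities of Fig. 9 (the real values are not reproduced here).
z = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])
rel_lum = np.array([1.0, 3.0, 6.0, 8.5, 10.0, 10.5, 9.0, 5.0, 2.0])

# Cubic with no constant term, log10 L~(z) = a1*z + a2*z^2 + a3*z^3.
y = np.log10(rel_lum)
A = np.column_stack([z, z**2, z**3])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

print(coeffs)                # fitted (a1, a2, a3)
print(10.0 ** (A @ coeffs))  # fitted relative luminosity density at the data points
```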

Inserting (69) into (67) puts the latter into the form

The dimensional content of this integral has been concentrated into a prefactor Iδ, defined by

This constant shares two important properties of its bolometric counterpart Q* (Sec. 2.2). First, it is independent of the uncertainty h0 in Hubble's constant. Second, it is low by everyday standards. It is, for example, far below the intensity of the zodiacal light, which is caused by the scattering of sunlight by dust in the plane of the solar system. This is important, since the value of Iδ sets the scale of the integral (70) itself. Indeed, existing observational bounds on Iλ(λ0) at λ0 ≈ 4400 Å are of the same order as Iδ. Toller, for example, set an upper limit of Iλ(4400 Å) < 4.5 × 10^-9 erg s^-1 cm^-2 Å^-1 ster^-1 using data from the Pioneer 10 photopolarimeter [18].

Dividing Iδ of (71) by the photon energy E0 = hc/λ0 (where hc = 1.986 × 10^-8 erg Å) puts the EBL intensity integral (70) into new units, sometimes referred to as continuum units (CUs):

where 1 CU ≡ 1 photon s^-1 cm^-2 Å^-1 ster^-1. While both kinds of units (CUs and erg s^-1 cm^-2 Å^-1 ster^-1) are in common use for reporting spectral intensity at near-optical wavelengths, CUs appear most frequently. They are also preferable from a theoretical point of view, because they most faithfully reflect the energy content of a spectrum [19]. A third type of intensity unit, the S10 (loosely, the equivalent of one tenth-magnitude star per square degree) is also occasionally encountered but will be avoided in this review as it is wavelength-dependent and involves other subtleties which differ between workers.
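For example, the Toller limit quoted above converts to continuum units simply by dividing by the photon energy at 4400 Å; a short sketch:

```python
HC = 1.986e-8  # h*c in erg * angstrom, as quoted in the text

def erg_to_cu(intensity_erg, lam0):
    """Convert a spectral intensity in erg s^-1 cm^-2 A^-1 ster^-1 into CUs
    (photons s^-1 cm^-2 A^-1 ster^-1) by dividing by the photon energy hc/lam0."""
    return intensity_erg / (HC / lam0)

# Toller's upper limit at 4400 angstroms, converted to continuum units:
print(erg_to_cu(4.5e-9, 4400.0))  # roughly 1000 CUs
```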

If we let the redshift of formation zf → ∞, then Eq. (70) reduces to

The comoving luminosity density ℒ̃(λ0/λp - 1) which appears here is fixed by the fit (69) to the HDF data in Fig. 9. The Hubble parameter is given by (33) as H̃(λ0/λp - 1) = [Ωm,0 (λ0/λp)^3 + ΩΛ,0 - (Ωm,0 + ΩΛ,0 - 1)(λ0/λp)^2]^(1/2) for a universe containing dust-like matter and vacuum energy with density parameters Ωm,0 and ΩΛ,0 respectively.
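The dimensionless Hubble parameter just quoted is easy to evaluate numerically; a short sketch, using the three (Ωm,0, ΩΛ,0) combinations of the special cases discussed just below:

```python
import numpy as np

def H_tilde(z, omega_m, omega_lambda):
    """Dimensionless Hubble parameter H(z)/H0 for dust-like matter plus
    vacuum energy, as in the expression quoted above."""
    return np.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda
                   - (omega_m + omega_lambda - 1.0) * (1.0 + z) ** 2)

# Evaluate at the effective redshift 1 + z = lam0/lam_p for a few wavelengths,
# in the EdS (1, 0), de Sitter (0, 1) and empty (0, 0) special cases.
lam_p = 4400.0
for lam0 in (4400.0, 8800.0, 17600.0):
    z = lam0 / lam_p - 1.0
    print(lam0, H_tilde(z, 1.0, 0.0), H_tilde(z, 0.0, 1.0), H_tilde(z, 0.0, 0.0))
```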

Turning off the luminosity density evolution (so that ℒ̃ = 1 = const.), one obtains three trivial special cases:

These are taken at λ0 ≥ λp, where (Ωm,0, ΩΛ,0) = (1, 0), (0, 1) and (0, 0) respectively for the three models cited (Table 1). The first of these is the "7/2-law" which often appears in the particle-physics literature as an approximation to the spectrum of EBL contributions from decaying particles. But the second (de Sitter) probably provides a better approximation, given current thinking regarding the values of Ωm,0 and ΩΛ,0.

To evaluate the spectral EBL intensity (70) and other quantities in a general situation, it will be helpful to define a suite of cosmological test models which span the widest range possible in the parameter space defined by Ωm,0 and ΩΛ,0. We list four such models in Table 2 and summarize the main rationale for each here (see Sec. 4 for more detailed discussion). The Einstein-de Sitter (EdS) model has long been favoured on grounds of simplicity, and is still sometimes referred to as the "standard cold dark matter" or SCDM model. It has come under increasing pressure, however, as evidence mounts for levels of Ωm,0 ≲ 0.5, and most recently from observations of Type Ia supernovae (SNIa) which indicate that ΩΛ,0 > Ωm,0. The Open Cold Dark Matter (OCDM) model is more consistent with data on Ωm,0 and holds appeal for those who have been reluctant to accept the possibility of a nonzero vacuum energy. It faces the considerable challenge, however, of explaining data on the spectrum of CMB fluctuations, which imply that Ωm,0 + ΩΛ,0 ≈ 1. The Λ + Cold Dark Matter (ΛCDM) model has rapidly become the new standard in cosmology because it agrees best with both the SNIa and CMB observations. However, this model suffers from a "coincidence problem," in that Ωm(t) and ΩΛ(t) evolve so differently with time that the probability of finding ourselves at a moment in cosmic history when they are even of the same order of magnitude appears unrealistically small. This is addressed to some extent in the last model, where we push Ωm,0 and ΩΛ,0 to their lowest and highest limits, respectively. In the case of Ωm,0 these limits are set by big-bang nucleosynthesis, which requires a density of at least Ωm,0 ≈ 0.03 in baryons (hence the Λ + Baryonic Dark Matter or ΛBDM model). Upper limits on ΩΛ,0 come from various arguments, such as the observed frequency of gravitational lenses and the requirement that the Universe began in a big-bang singularity. Within the context of isotropic and homogeneous cosmology, these four models cover the full range of what would be considered plausible by most workers.

Fig. 10 shows the solution of the full integral (70) for all four test models, superimposed on a plot of available experimental data at near-optical wavelengths (i.e. a close-up of Fig. 1). The short-wavelength cutoff in these plots is an artefact of the δ-function SED, but the behaviour of Iλ(λ0) at wavelengths above λp = 4400 Å is quite revealing, even in a model as simple as this one. In the EdS case (a), the rapid fall-off in intensity with λ0 indicates that nearby (low-redshift) galaxies dominate. There is a secondary hump at λ0 ≈ 10,000 Å, which is an "echo" of the peak in galaxy formation, redshifted into the near infrared. This hump becomes progressively larger relative to the optical peak at 4400 Å as the ratio of ΩΛ,0 to Ωm,0 grows. Eventually one has the situation in the de Sitter-like model (d), where the galaxy-formation peak entirely dominates the observed EBL signal, despite the fact that it comes from distant galaxies at z ≈ 3. This is because a large ΩΛ,0-term (especially one which is large relative to Ωm,0) inflates comoving volume at high redshifts. Since the comoving number density of galaxies is fixed by the fit to observational data on ℒ(z) (Fig. 9), the number of galaxies at these redshifts must go up, pushing up the infrared part of the spectrum. Although the δ-function spectrum is an unrealistic one, we will see that this trend persists in more sophisticated models, providing a clear link between observations of the EBL and the cosmological parameters Ωm,0 and ΩΛ,0.

Fig. 10 is plotted over a broad range of wavelengths from the near ultraviolet (NUV; 2000-4000 Å) to the near infrared (NIR; 8000-40,000 Å). The upper limits in this plot (solid symbols and heavy lines) come from analyses of OAO-2 satellite data (LW76 [20]), ground-based telescopes (SS78 [21], D79 [22], BK86 [23]), Pioneer 10 (T83 [18]), sounding rockets (J84 [24], T88 [25]), the shuttle-borne Hopkins UVX (M90 [26]) and -- in the near infrared -- the DIRBE instrument aboard the COBE satellite (H98 [27]). The past few years have also seen the first widely-accepted detections of the EBL (Fig. 10, open symbols). In the NIR these have come from continued analysis of DIRBE data in the K-band (22,000 Å) and L-band (35,000 Å; WR00 [28]), as well as the J-band (12,500 Å; C01 [29]). Reported detections in the optical using a combination of Hubble Space Telescope (HST) and Las Campanas telescope observations (B02 [30]) are preliminary [31] but potentially very important.

Fig. 10 shows that EBL intensities based on the simple δ-function spectrum are in rough agreement with these data. Predicted intensities come in at or just below the optical limits in the low-ΩΛ,0 cases (a) and (b), and remain consistent with the infrared limits even in the high-ΩΛ,0 cases (c) and (d). Vacuum-dominated models with even higher ratios of ΩΛ,0 to Ωm,0 would, however, run afoul of DIRBE limits in the J-band.

The Gaussian distribution provides a useful generalization of the δ-function for modelling sources whose spectra, while essentially monochromatic, are broadened by some physical process. For example, photons emitted by the decay of elementary particles inside dark-matter halos would have their energies Doppler-broadened by circular velocities vc ≈ 220 km s^-1, giving rise to a spread of order σλ(λ) = (2 vc/c) λ ≈ 0.0015 λ in the SED. In the context of galaxies, this extra degree of freedom provides a simple way to model the width of the bright part of the spectrum. If we take this to cover the B-band (3600-5500 Å), then σλ ≈ 1000 Å. The Gaussian SED reads

F(λ, z) = [L(z)/(√(2π) σλ)] exp[-(λ - λp)^2/(2 σλ^2)]

where λp is the wavelength at which the galaxy emits most of its light. We take λp = 4400 Å as before, and note that integration over λ confirms that L(z) = ∫₀^∞ F(λ, z) dλ as required. Once again we can make the simplifying assumption that L(z) = L0 = const., or we can use the empirical fit ℒ̃(z) ≡ n(z) L(z)/ℒ0 to the HDF data in Fig. 9. Taking the latter course and substituting (75) into (62), we obtain

The dimensional content of this integral has been pulled into a prefactor Ig = Ig(λ0), defined by

Here we have divided (76) by the photon energy E0 = hc/λ0 to put the result into CUs, as before.

Results are shown in Fig. 11, where we have taken λp = 4400 Å, σλ = 1000 Å and zf = 6. Aside from the fact that the short-wavelength cutoff has disappeared, the situation is qualitatively similar to that obtained using a δ-function approximation. (This similarity becomes formally exact as σλ approaches zero.) One sees, as before, that the expected EBL signal is brightest at optical wavelengths in an EdS Universe (a), but that the infrared hump due to the redshifted peak of galaxy formation begins to dominate for higher-ΩΛ,0 models (b) and (c), becoming overwhelming in the de Sitter-like model (d). Overall, the best agreement between calculated and observed EBL levels occurs in the ΛCDM model (c). The matter-dominated EdS (a) and OCDM (b) models contain too little light (requiring one to postulate an additional source of optical or near-optical background radiation besides that from galaxies), while the ΛBDM model (d) comes uncomfortably close to containing too much light. This is an interesting situation, and one which motivates us to reconsider the problem with more realistic models for the galaxy SED.
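As a sanity check on the Gaussian SED used in this subsection (assuming the standard normalized Gaussian form written above), the following sketch confirms numerically that it integrates to L(z) as required:

```python
import numpy as np

def gaussian_sed(lam, L, lam_p=4400.0, sigma=1000.0):
    """Gaussian SED normalized so that its integral over wavelength equals L."""
    norm = L / (np.sqrt(2.0 * np.pi) * sigma)
    return norm * np.exp(-0.5 * ((lam - lam_p) / sigma) ** 2)

lam = np.linspace(0.0, 2.0e4, 100001)       # wavelength grid in angstroms
L = 1.0                                      # luminosity in arbitrary units
print(np.trapz(gaussian_sed(lam, L), lam))   # ~1.0, confirming the normalization
```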

The simplest nontrivial approach to a galaxy spectrum is to model it as a blackbody, and this was done by previous workers such as McVittie and Wyatt [12], Whitrow and Yallop [13, 14] and Wesson [15]. Let us suppose that the galaxy SED is a product of the Planck function and some wavelength-independent parameter C(z):

Here σSB ≡ 2π^5 k^4/(15 c^2 h^3) = 5.67 × 10^-5 erg cm^-2 s^-1 K^-4 is the Stefan-Boltzmann constant. The galaxy luminosity is normally regarded as an increasing function of redshift (at least out to the redshift of galaxy formation). This can in principle be accommodated by allowing C(z) or T(z) to increase with z in (78). The former choice would correspond to a situation in which galaxy luminosity decreases with time while its spectrum remains unchanged, as might happen if stars were simply to die. The second choice corresponds to a situation in which galaxy luminosity decreases with time as its spectrum becomes redder, as may happen when its stellar population ages. The latter scenario is more realistic, and will be adopted here. The luminosity L(z) is found by integrating F(λ, z) over all wavelengths:

so that the unknown function C(z) must satisfy C(z) = L(z)/[σSB T^4(z)]. If we require that Stefan's law (L ∝ T^4) hold at each z, then

where T0 is the present "galaxy temperature" (i.e. the blackbody temperature corresponding to a peak wavelength in the B-band). Thus the evolution of galaxy luminosity in this model is just that which is required by Stefan's law for blackbodies whose temperatures evolve as T(z). This is reasonable, since galaxies are made up of stellar populations which cool and redden with time as hot massive stars die out.

Let us supplement this with the assumption of constant comoving number density, n(z) = n0 = const. This is sometimes referred to as the pure luminosity evolution or PLE scenario, and while there is some controversy on this point, PLE has been found by many workers to be roughly consistent with observed numbers of galaxies at faint magnitudes, especially if there is a significant vacuum energy density ΩΛ,0 > 0. Proceeding on this assumption, the comoving galaxy luminosity density can be written

This expression can then be inverted for blackbody temperature T(z) as a function of redshift, since the form of ℒ(z) is fixed by Fig. 9:

T(z) = T0 [ℒ̃(z)]^(1/4)

We can check this by choosing T0 = 6600 K (i.e. a present peak wavelength of 4400 Å) and reading off values of ℒ̃(z) = ℒ(z)/ℒ0 at the peaks of the curves marked "weak," "moderate" and "strong" evolution in Fig. 9. Putting these numbers into (82) yields blackbody temperatures (and corresponding peak wavelengths) of 10,000 K (2900 Å), 11,900 K (2440 Å) and 13,100 K (2210 Å) respectively at the galaxy-formation peak. These numbers are consistent with the idea that galaxies would have been dominated by hot UV-emitting stars at this early time.
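The quoted peak wavelengths follow from Wien's law applied to these three temperatures; a one-line check:

```python
WIEN_B = 2.90e7  # Wien's displacement constant in angstrom * kelvin

# Galaxy-formation-peak blackbody temperatures quoted above for the weak,
# moderate and strong evolution scenarios, and their Wien peak wavelengths.
for T in (10000.0, 11900.0, 13100.0):
    print(T, round(WIEN_B / T))  # ~2900, ~2440 and ~2210 angstroms respectively
```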

Inserting the expressions (80) for C(z) and (82) for T(z) into the SED (78), and substituting the latter into the EBL integral (62), we obtain

The dimensional prefactor Ib = Ib(T0, λ0) reads in this case

This integral is evaluated and plotted in Fig. 12, where we have set zf = 6 following recent observational hints of an epoch of "first light" at this redshift [32]. Overall EBL intensity is insensitive to this choice, provided that zf ≳ 3. Between zf = 3 and zf = 6, Iλ(λ0) rises by less than 1% below λ0 = 10,000 Å and less than 5% at λ0 = 20,000 Å (where most of the signal originates at high redshifts). There is no further increase beyond zf > 6 at the three-figure level of precision.

Fig. 12 shows some qualitative differences from our earlier results obtained using δ-function and Gaussian SEDs. Most noticeably, the prominent "double-hump" structure is no longer apparent. The key evolutionary parameter is now the blackbody temperature T(z), and this goes as [ℒ̃(z)]^(1/4), so that individual features in the comoving luminosity density profile are suppressed. (A similar effect can be achieved with the Gaussian SED by choosing larger values of σλ.) As before, however, the long-wavelength part of the spectrum climbs steadily up the right-hand side of the figure as one moves from the ΩΛ,0 = 0 models (a) and (b) to the ΩΛ,0-dominated models (c) and (d), whose light comes increasingly from more distant, redshifted galaxies.

Absolute EBL intensities in each of these four models are consistent with what we have seen already. This is not surprising, because changing the shape of the SED merely shifts light from one part of the spectrum to another. It cannot alter the total amount of light in the EBL, which is set by the comoving luminosity density ℒ(z) of sources once the background cosmology (and hence the source lifetime) has been chosen. As before, the best match between calculated EBL intensities and the observational detections is found for the ΩΛ,0-dominated models (c) and (d). The fact that the EBL is now spread across a broader spectrum has pulled down its peak intensity slightly, so that the ΛBDM model (d) no longer threatens to violate observational limits and in fact fits them rather nicely. The zero-ΩΛ,0 models (a) and (b) again appear to require some additional source of background radiation (beyond that produced by galaxies) if they are to contain enough light to make up the levels of EBL intensity that have been reported.

The previous sections have shown that simple models of galaxy spectra, combined with data on the evolution of comoving luminosity density in the Universe, can produce levels of spectral EBL intensity in rough agreement with observational limits and reported detections, and even discriminate to a degree between different cosmological models. However, the results obtained up to this point are somewhat unsatisfactory in that they are sensitive to theoretical input parameters, such as λp and T0, which are hard to connect with the properties of the actual galaxy population.

A more comprehensive approach would use observational data in conjunction with theoretical models of galaxy evolution to build up an ensemble of evolving galaxy SEDs F(λ, z) and comoving number densities n(z) which would depend not only on redshift but on galaxy type as well. Increasingly sophisticated work has been carried out along these lines over the years by Partridge and Peebles [33], Tinsley [34], Bruzual [35], Code and Welch [36], Yoshii and Takahara [37] and others. The last-named authors, for instance, divided galaxies into five morphological types (E/S0, Sab, Sbc, Scd and Sdm), with a different evolving SED for each type, and found that their collective EBL intensity at NIR wavelengths was about an order of magnitude below the levels suggested by observation.

Models of this kind, however, are complicated while at the same time containing uncertainties. This makes their use somewhat incompatible with our purpose here, which is primarily to obtain a first-order estimate of EBL intensity so that the importance of expansion can be properly ascertained. Also, observations have begun to show that the above morphological classifications are of limited value at redshifts z ≳ 1, where spirals and ellipticals are still in the process of forming [38]. As we have already seen, this is precisely where much of the EBL may originate, especially if luminosity density evolution is strong, or if there is a significant ΩΛ,0-term.

What is needed, then, is a simple model which does not distinguish too finely between the spectra of galaxy types as they have traditionally been classified, but which can capture the essence of broad trends in luminosity density evolution over the full range of redshifts 0 ≤ z ≤ zf. For this purpose we will group together the traditional classes (spiral, elliptical, etc.) under the single heading of quiescent or normal galaxies. At higher redshifts (z ≳ 1), we will allow a second class of objects to play a role: the active or starburst galaxies. Whereas normal galaxies tend to be comprised of older, redder stellar populations, starburst galaxies are dominated by newly-forming stars whose energy output peaks in the ultraviolet (although much of this is absorbed by dust grains and subsequently reradiated in the infrared). One signature of the starburst type is thus a decrease in F(λ) as a function of λ over NUV and optical wavelengths, while normal types show an increase [39]. Starburst galaxies also tend to be brighter, reaching bolometric luminosities as high as 10^12 - 10^13 L☉, versus 10^10 - 10^11 L☉ for normal types.

There are two ways to obtain SEDs for these objects: by reconstruction from observational data, or as output from theoretical models of galaxy evolution. The former approach has had some success, but becomes increasingly difficult at short wavelengths, so that results have typically been restricted to λ ≳ 1000 Å [39]. This represents a serious limitation if we want to integrate out to redshifts zf ≈ 6 (say), since it means that our results are only strictly reliable down to λ0 = (1 + zf) λ ≈ 7000 Å. In order to integrate out to zf ≈ 6 and still go down as far as the NUV (λ0 ≈ 2000 Å), we require SEDs which are good to λ ≈ 300 Å in the galaxy rest-frame. For this purpose we will make use of theoretical galaxy-evolution models, which have advanced to the point where they cover the entire spectrum from the far ultraviolet to radio wavelengths. This broad range of wavelengths involves diverse physical processes such as star formation, chemical evolution, and (of special importance here) dust absorption of ultraviolet light and re-emission in the infrared. Typical normal and starburst galaxy SEDs based on such models are now available down to λ ≈ 100 Å [40]. These functions, displayed in Fig. 13, will constitute our normal and starburst galaxy SEDs, Fn(λ) and Fs(λ).

Fig. 13 shows the expected increase in Fn(λ) with λ at NUV wavelengths (λ ≈ 2000 Å) for normal galaxies, as well as the corresponding decrease for starbursts. What is most striking about both templates, however, is their overall multi-peaked structure. These objects are far from pure blackbodies, and the primary reason for this is dust. This effectively removes light from the shortest-wavelength peaks (which are due mostly to star formation), and transfers it to the longer-wavelength ones. The dashed lines in Fig. 13 show what the SEDs would look like if this dust reprocessing were ignored. The main difference between normal and starburst types lies in the relative importance of this process. Normal galaxies emit as little as 30% of their bolometric intensity in the infrared, while the equivalent fraction for the largest starburst galaxies can reach 99%. Such variations can be incorporated by modifying input parameters such as star formation timescale and gas density, leading to spectra which are broadly similar in shape to those in Fig. 13 but differ in normalization and "tilt" toward longer wavelengths. The results have been successfully matched to a wide range of real galaxy spectra [40].

We proceed to calculate the spectral EBL intensity using Fn(λ) and Fs(λ), with the characteristic luminosities of these two types found as usual by normalization, ∫₀^∞ Fn(λ) dλ = Ln and ∫₀^∞ Fs(λ) dλ = Ls. Let us assume that the comoving luminosity density of the Universe at any redshift z is a combination of normal and starburst components

ℒ(z) = Ln nn(z) + Ls ns(z)

where comoving number densities are

nn(z) = [1 - f(z)] n(z) and ns(z) = f(z) n(z)

In other words, we will account for evolution in ℒ(z) solely in terms of a changing starburst fraction f(z), and a single comoving number density n(z) as before. Ln and Ls are awkward to work with for dimensional reasons, and we will find it more convenient to specify the SED instead by two dimensionless parameters, the local starburst fraction f0 and luminosity ratio ℓ0:

f0 ≡ f(0) and ℓ0 ≡ Ls/Ln

Observations indicate that f0 ≈ 0.05 in the local population [39], and the SEDs shown in Fig. 13 have been fitted to a range of normal and starburst galaxies with 40 ≲ ℓ0 ≲ 890 [40]. We will allow these two parameters to vary in the ranges 0.01 ≤ f0 ≤ 0.1 and 10 ≤ ℓ0 ≤ 1000. This, in combination with our "strong" and "weak" limits on luminosity-density evolution, gives us the flexibility to obtain upper and lower bounds on EBL intensity.

The functions n(z) and f(z) can now be fixed by equating ℒ(z) as defined by (85) to the comoving luminosity-density curves inferred from HDF data (Fig. 9), and requiring that f → 1 at peak luminosity (i.e. assuming that the galaxy population is entirely starburst-dominated at the redshift zp of peak luminosity). These conditions are not difficult to set up. One finds that modest number-density evolution is required in general, if f(z) is not to over- or under-shoot unity at zp. We follow [42] and parametrize this with the function n(z) = n0 (1 + z)^η for z ≤ zp. Here η is often termed the merger parameter, since a value of η > 0 would imply that the comoving number density of galaxies decreases with time.

Pulling these requirements together, one obtains a model with

Here N(z) ≡ [1/ℓ0 + (1 - 1/ℓ0) f0] ℒ̃(z) and η = ln[N(zp)]/ln(1 + zp). The evolution of f(z), nn(z) and ns(z) is plotted in Fig. 14 for five models: a best-fit Model 0, corresponding to the moderate evolution curve in Fig. 9 with f0 = 0.05 and ℓ0 = 20, and four other models chosen to produce the widest possible spread in EBL intensities across the optical band. Models 1 and 2 are the most starburst-dominated, with initial starburst fraction and luminosity ratio at their upper limits (f0 = 0.1 and ℓ0 = 1000). Models 3 and 4 are the least starburst-dominated, with the same quantities at their lower limits (f0 = 0.01 and ℓ0 = 10). Luminosity density evolution is set to "weak" in the odd-numbered Models 1 and 3, and "strong" in the even-numbered Models 2 and 4. (In principle one could identify four other "extreme" combinations, such as maximum f0 with minimum ℓ0, but these will be intermediate to Models 1-4.) We find merger parameters between η = +0.4 and +0.5 in the strong-evolution Models 2 and 4, and between η = -0.5 and -0.4 in the weak-evolution Models 1 and 3, while η = 0 for Model 0. These are well within the normal range [43].
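Taking the merger-parameter relation above at face value, a short sketch reproduces the quoted values of η. The peak relative luminosity densities used below (roughly 5.3 and 15.5 for the weak and strong scenarios) are rough estimates back-computed from the blackbody temperatures quoted earlier in the text, not values read from the original figure.

```python
import numpy as np

def merger_parameter(f0, ell0, rel_lum_peak, zp=2.5):
    """Merger parameter eta = ln N(zp) / ln(1 + zp), with
    N(z) = [1/ell0 + (1 - 1/ell0) * f0] * L(z)/L0 as given above."""
    N_peak = (1.0 / ell0 + (1.0 - 1.0 / ell0) * f0) * rel_lum_peak
    return np.log(N_peak) / np.log(1.0 + zp)

# (f0, ell0, approximate peak L(z)/L0) for Models 1-4.
models = {
    "Model 1 (weak, starburst-dominated)":   (0.10, 1000.0, 5.3),
    "Model 2 (strong, starburst-dominated)": (0.10, 1000.0, 15.5),
    "Model 3 (weak, normal-dominated)":      (0.01, 10.0, 5.3),
    "Model 4 (strong, normal-dominated)":    (0.01, 10.0, 15.5),
}
for name, (f0, ell0, peak) in models.items():
    print(name, round(merger_parameter(f0, ell0, peak), 2))
# Output lies close to the quoted ranges: about -0.5 and -0.4 for the
# weak-evolution models, and about +0.4 for the strong-evolution models.
```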

The information contained in Fig. 14 can be summarized in words as follows: starburst galaxies formed near zf ≈ 4 and increased in comoving number density until zp ≈ 2.5 (the redshift of peak comoving luminosity density in Fig. 9). They then gave way to a steadily growing population of fainter normal galaxies which began to dominate between z ≈ 1 and z ≈ 2 (depending on the model) and now make up 90-99% of the total galaxy population at z = 0. This scenario is in good agreement with others that have been constructed to explain the observed faint blue excess in galaxy number counts [41].

We are now in a position to compute the total spectral EBL intensity by substituting the SEDs (Fn, Fs) and comoving number densities (86) into Eq. (62). Results can be written in the form Iλ(λ0) = Iλ^n(λ0) + Iλ^s(λ0), where:

Here Iλ^n and Iλ^s represent contributions from normal and starburst galaxies respectively, and ñ(z) ≡ n(z)/n0 is the relative comoving number density. The dimensional content of both integrals has been pulled into a prefactor

This is independent of h0, as before, because the factor of h0 in ℒ0 cancels out the one in H0. The quantity ℒ0 appears here when we normalize the galaxy SEDs Fn(λ) and Fs(λ) to the observed comoving luminosity density of the Universe. To see this, note that Eq. (85) reads ℒ0 = n0 Ln [1 + (ℓ0 - 1) f0] at z = 0. Since ℒ0 ≡ n0 L0, it follows that Ln = L0/[1 + (ℓ0 - 1) f0] and Ls = L0 ℓ0/[1 + (ℓ0 - 1) f0]. Thus a factor of L0 can be divided out of the functions Fn and Fs and put directly into Eq. (89) as required.
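A short sketch of this normalization, using the Model 0 parameters from the text and checking that the z = 0 comoving luminosity density is preserved:

```python
def component_luminosities(L0, f0, ell0):
    """Split the mean galaxy luminosity L0 into normal and starburst values,
    Ln = L0 / [1 + (ell0 - 1) f0] and Ls = ell0 * Ln, so that the z = 0
    comoving luminosity density is unchanged."""
    Ln = L0 / (1.0 + (ell0 - 1.0) * f0)
    return Ln, ell0 * Ln

L0, f0, ell0 = 1.0, 0.05, 20.0      # Model 0 parameters; L0 in arbitrary units
Ln, Ls = component_luminosities(L0, f0, ell0)
print(Ln, Ls)
print((1.0 - f0) * Ln + f0 * Ls)    # recovers L0, i.e. the normalization holds
```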

The spectral intensity (89) is plotted in Fig. 15, where we have set zf = 6 as usual. (Results are insensitive to this choice, increasing by less than 5% as one moves from zf = 3 to zf = 6, with no further increase for zf ≳ 6 at three-figure precision.) These plots show that the most starburst-dominated models (1 and 2) produce the bluest EBL spectra, as might be expected. For these two models, EBL contributions from normal galaxies remain well below those from starbursts at all wavelengths, so that the bump in the observed spectrum at λ0 ≈ 4000 Å is essentially an echo of the peak at λ ≈ 1100 Å in the starburst SED (Fig. 13), redshifted by a factor (1 + zp) from the epoch zp ≈ 2.5 of maximum comoving luminosity density. By contrast, in the least starburst-dominated models (3 and 4), EBL contributions from normal galaxies catch up to and exceed those from starbursts at λ0 ≳ 10,000 Å, giving rise to the bump seen at λ0 ≈ 20,000 Å in these models. Absolute EBL intensities are highest in the strong-evolution models (2 and 4) and lowest in the weak-evolution models (1 and 3). We emphasize that the total amount of light in the EBL is determined by the choice of luminosity density profile (for a given cosmological model). The choice of SED merely shifts this light from one part of the spectrum to another. Within the context of the simple two-component model described above, and the constraints imposed on luminosity density by the HDF data (Sec. 3.2), the curves in Fig. 15 represent upper and lower limits on the spectral intensity of the EBL at near-optical wavelengths.

These curves are spread over a broader range of wavelengths than those obtained earlier using single-component Gaussian and blackbody spectra. This leads to a drop in overall intensity, as we can appreciate by noting that there now appears to be a significant gap between theory and observation in all but the most vacuum-dominated cosmology, ΛBDM (d). This is so even for the models with the strongest luminosity density evolution (Models 2 and 4). In the case of the EdS cosmology (a), this gap is nearly an order of magnitude, as reported by Yoshii and Takahara [37]. Similar conclusions have been reached more recently from an analysis of Subaru Deep Field data by Totani [44], whose analysis suggests that the shortfall could be made up by a very diffuse, previously undetected component of background radiation not associated with galaxies. Other workers have argued that existing galaxy populations are enough to explain the data if different assumptions are made about their SEDs [45], or if allowance is made for faint low surface brightness galaxies below the detection limit of existing surveys [46].

Having obtained quantitative estimates of the spectral EBL intensity which are in reasonable agreement with observation, we return to the question posed in Sec. 2.4: why precisely is the sky dark at night? By "dark" we now mean specifically dark at near-optical wavelengths. We can provide a quantitative answer to this question by using a spectral version of our previous bolometric argument. That is, we compute the EBL intensity Iλ,stat in model universes which are equivalent to expanding ones in every way except expansion, and then take the ratio Iλ/Iλ,stat. If this is of order unity, then expansion plays a minor role and the darkness of the optical sky (like the bolometric one) must be attributed mainly to the fact that the Universe is too young to have filled up with light. If Iλ/Iλ,stat << 1, on the other hand, then we would have a situation qualitatively different from the bolometric one, and expansion would play a crucial role in the resolution to Olbers' paradox.

The spectral EBL intensity for the equivalent static model is obtained by putting the functions ñ(z), f(z), Fn(λ), Fs(λ) and H̃(z) into (64) rather than (62). This results in Iλ,stat(λ0) = Iλ,stat^n(λ0) + Iλ,stat^s(λ0), where normal and starburst contributions are given by

Despite a superficial resemblance to their counterparts (89) in the expanding Universe, these are vastly different expressions. Most importantly, the SEDs Fn(λ0) and Fs(λ0) no longer depend on z and have been pulled out of the integrals. The quantity Iλ,stat(λ0) is effectively a weighted mean of the SEDs Fn(λ0) and Fs(λ0). The weighting factors (i.e. the integrals over z) are related to the age of the galaxies, ∫₀^zf dz/[(1 + z) H̃(z)], but modified by factors of nn(z) and ns(z) under the integral. This latter modification is important because it prevents the integrals from increasing without limit as zf becomes arbitrarily large, a problem that would otherwise introduce considerable uncertainty into any attempt to put bounds on the ratio Iλ,stat/Iλ [15]. A numerical check confirms that Iλ,stat is nearly as insensitive to the value of zf as Iλ, increasing by up to 8% as one moves from zf = 3 to zf = 6, but with no further increase for zf ≳ 6 at the three-figure level.

The ratio Iλ/Iλ,stat is plotted over the waveband 2000-25,000 Å in Fig. 16, where we have set zf = 6. (Results are insensitive to this choice, as we have mentioned above, and it may be noted that they are also independent of uncertainty in constants such as ℒ0, since these are common to both Iλ and Iλ,stat.) Several features in this figure deserve notice. First, the average value of Iλ/Iλ,stat across the spectrum is about 0.6, consistent with bolometric expectations (Sec. 2). Second, the diagonal, bottom-left to top-right orientation arises largely because Iλ(λ0) drops off at short wavelengths, while Iλ,stat(λ0) does so at long ones. The reason why Iλ(λ0) drops off at short wavelengths is that ultraviolet light reaches us only from the nearest galaxies; anything from more distant ones is redshifted into the optical. The reason why Iλ,stat(λ0) drops off at long wavelengths is that it is a weighted mixture of the galaxy SEDs, and drops off at exactly the same place that they do: λ0 ≈ 3 × 10^4 Å. In fact, the weighting is heavily tilted toward the dominant starburst component, so that the two sharp bends apparent in Fig. 16 are essentially (inverted) reflections of features in Fs(λ0): namely, the small bump at λ0 ≈ 4000 Å and the shoulder at longer wavelengths.

Finally, the numbers: Fig. 16 shows that the ratio Iλ/Iλ,stat is remarkably consistent across the B-band (4000-5000 Å) in all four cosmological models, varying from a high of 0.46 ± 0.10 in the EdS model to a low of 0.39 ± 0.08 in the ΛBDM model. These numbers should be compared with the bolometric result of Q/Qstat ≈ 0.6 ± 0.1 from Sec. 2. They tell us that expansion does play a greater role in determining B-band EBL intensity than it does across the spectrum as a whole -- but not by much. If its effects were removed, the night sky at optical wavelengths would be anywhere from two times brighter (in the EdS model) to three times brighter (in the ΛBDM model). These results depend modestly on the makeup of the evolving galaxy population, and Fig. 16 shows that Iλ/Iλ,stat in every case is highest for the weak-evolution Model 1, and lowest for the strong-evolution Model 4. This is as we would expect, based on our discussion at the beginning of this section: models with the strongest evolution effectively "concentrate" their light production over the shortest possible interval in time, so that the importance of the lifetime factor drops relative to that of expansion. Our numerical results, however, prove that this effect cannot qualitatively alter the resolution of Olbers' paradox. Whether expansion reduces the background intensity by a factor of two or three, its order of magnitude is still set by the lifetime of the Universe.
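The "two to three times brighter" statement is just the reciprocal of the quoted B-band ratios; a one-line check:

```python
# Reciprocals of the quoted B-band ratios: how much brighter the optical sky
# would be in an equivalent static universe.
for model, ratio in [("EdS", 0.46), ("LambdaBDM", 0.39)]:
    print(model, round(1.0 / ratio, 1))  # ~2.2 and ~2.6 times brighter
```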

There is one factor which we have not considered in this section, and that is the extinction of photons by intergalactic dust and neutral hydrogen, both of which are strongly absorbing at ultraviolet wavelengths. The effect of this would primarily be to remove ultraviolet light from high-redshift galaxies and transfer it into the infrared -- light that would otherwise be redshifted into the optical and contribute to the EBL. The latter's intensity Iλ(λ0) would therefore drop, and one could expect reductions over the B-band in particular. The importance of this effect is difficult to assess because we have limited data on the character and distribution of dust beyond our own galaxy. We will find indications in Sec. 7, however, that the reduction could be significant at the shortest wavelengths considered here (λ0 ≈ 2000 Å) for the most extreme dust models. This would further widen the gap between observed and predicted EBL intensities noted at the end of Sec. 3.6.

Absorption plays far less of a role in the equivalent static models, where there is no redshift. (Ultraviolet light is still absorbed, but the effect does not carry over into the optical.) Therefore, the ratio Iλ/Iλ,stat would be expected to drop in nearly direct proportion to the drop in Iλ. In this sense Olbers had part of the solution after all -- not (as he thought) because intervening matter "blocks" the light from distant sources, but because it transfers it out of the optical. The importance of this effect, which would be somewhere below that of expansion, is a separate issue from the one we have concerned ourselves with in this section. We have shown that expansion reduces EBL intensity by a factor of between two and three, depending on background cosmology and the evolutionary properties of the galaxy population. Thus the optical sky, like the bolometric one, is dark at night primarily because it has not had enough time to fill up with light from distant galaxies.


Equations of Motion of Spin in Electromagnetic Field

In the next section the general problem of spin precession in an external gravitational field will be reduced to the analogous problem for the case of an external electromagnetic field. The equations of motion for the spin of a relativistic particle in an electromagnetic field are not directly related to GR, and besides, they are well known. However, at least to make the presentation coherent, we will consider in this section just the problem referring to the electromagnetic field. The right-hand side of the equation for dS^μ/dτ should be linear and homogeneous both in the electromagnetic field strength F^μν and in the same four-vector S^μ, and may depend also on u^μ. In virtue of the first identity (7.20), the right-hand side should be four-dimensionally orthogonal to S^μ. Therefore, the general structure of the equation we are looking for is

dS^μ/dτ = α F^μν S_ν + β u^μ F^νλ u_ν S_λ


Mysterious objects at the edge of the electromagnetic spectrum

From end to end, the newly discovered gamma-ray bubbles extend 50,000 light-years, or roughly half of the Milky Way's diameter, as shown in this illustration. Hints of the bubbles' edges were first observed in X-rays (blue) by ROSAT, a Germany-led mission operating in the 1990s. The gamma rays mapped by Fermi (magenta) extend much farther from the galaxy's plane. Credit: NASA's Goddard Space Flight Center

The human eye is crucial to astronomy. Without the ability to see, the luminous universe of stars, planets and galaxies would be closed to us, unknown forever. Nevertheless, astronomers cannot shake their fascination with the invisible.

Outside the realm of human vision is an entire electromagnetic spectrum of wonders. Each type of light -- from radio waves to gamma-rays -- reveals something unique about the universe. Some wavelengths are best for studying black holes; others reveal newborn stars and planets; still others illuminate the earliest years of cosmic history.

NASA has many telescopes "working the wavelengths" up and down the electromagnetic spectrum. One of them, the Fermi Gamma-Ray Telescope orbiting Earth, has just crossed a new electromagnetic frontier.

"Fermi is picking up crazy-energetic photons," says Dave Thompson, an astrophysicist at NASA's Goddard Space Flight Center. "And it's detecting so many of them we've been able to produce the first all-sky map of the very high energy universe."

“This is what the sky looks like near the very edge of the electromagnetic spectrum, between 10 billion and 100 billion electron volts.”

The light we see with human eyes consists of photons with energies in the range 2 to 3 electron volts. The gamma-rays Fermi detects are billions of times more energetic, from 20 million to more than 300 billion electron volts. These gamma-ray photons are so energetic, they cannot be guided by the mirrors and lenses found in ordinary telescopes. Instead Fermi uses a sensor that is more like a Geiger counter than a telescope. If we could wear Fermi's gamma ray "glasses," we'd witness powerful bullets of energy – individual gamma rays – from cosmic phenomena such as supermassive black holes and hypernova explosions. The sky would be a frenzy of activity.
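As a rough check on these numbers, photon energy E and wavelength λ are related by λ = hc/E; a short sketch using hc ≈ 1239.84 eV nm:

```python
HC_EV_NM = 1239.84  # h*c in eV * nm

def wavelength_nm(energy_ev):
    """Photon wavelength in nanometres for a given energy in electron volts."""
    return HC_EV_NM / energy_ev

for E in (2.0, 3.0, 20e6, 300e9):
    print(f"{E:.3g} eV -> {wavelength_nm(E):.3g} nm")
# 2-3 eV photons correspond to visible light (~620-413 nm); 20 MeV and
# 300 GeV photons have wavelengths of roughly 6e-5 nm and 4e-9 nm.
```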

Before Fermi was launched in June 2008, there were only four known celestial sources of photons in this energy range. "In 3 years Fermi has found almost 500 more,” says Thompson.

A giant gamma-ray structure was discovered by processing Fermi all-sky data at energies from 1 to 10 billion electron volts, shown here. The dumbbell-shaped feature (center) emerges from the galactic center and extends 50 degrees north and south from the plane of the Milky Way, spanning the sky from the constellation Virgo to the constellation Grus. Credit: NASA/DOE/Fermi LAT/D. Finkbeiner et al.

What lies within this new realm?

"Mystery, for one thing," says Thompson. "About a third of the new sources can't be clearly linked to any of the known types of objects that produce gamma rays. We have no idea what they are."

The rest have one thing in common: prodigious energy.

"Among them are super massive black holes called blazars the seething remnants of supernova explosions and rapidly rotating neutron stars called pulsars.”

And some of the gamma rays seem to come from the 'Fermi bubbles' – giant structures emanating from the Milky Way's center and spanning some 20,000 light years above and below the galactic plane.

Exactly how these bubbles formed is another mystery.

Now that the first sky map is complete, Fermi is working on another, more sensitive and detailed survey.

"In the next few years, Fermi should reveal something new about all of these phenomena, what makes them tick, and why they generate such 'unearthly' levels of energy," says David Paneque, a leader in this work from the Max Planck Institute in Germany.

For now, though, there are more unknowns than knowns about "Fermi's world."