Astronomy

Could spy satellites use laser guide stars (for adaptive optics)?


Are sodium lasers useful for Earth observing space telescopes/spy satellites?


The guide star laser is used to adapt the optics at a specific time and location, for specific atmospheric conditions. It is not, as your question implies, used to calibrate the optics for long-term use.

Therefore, in order to calibrate the satellite with a laser fired from Earth, the reconnaissance target would have to provide the laser. This is useless for non-cooperative targets, which I would imagine make up the bulk of reconnaissance targets.

I suppose that one could fire a laser from the satellite itself, but this would have the obvious disadvantage of making it even more obvious when a spy satellite was overhead.

In any case, the guide star does not sample the entire atmosphere; I believe it produces an artificial star just a few tens of kilometers up. This works because it is the thicker, denser and more turbulent lower air that is of most concern to the optics. Firing the laser from above would instead place the artificial star in the thinner, stiller, more predictable upper atmosphere, so it would offer much less advantage.

One final reason not to use a guide star laser on a reconnaissance satellite would be the power budget. These satellites are made as small as possible, with as small a thermal and reflective signature as possible. The added power source, be it an RTG, conventional batteries, or solar panels, would likely increase the reflectivity of the satellite.


Optics to Outrace Them All

Editor’s note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org!

Title: Robo-AO Kepler Planetary Candidate Survey IV: The effect of nearby stars on 3857 planetary candidate systems
Author: Carl Ziegler, Nicholas Law, Christoph Baranec, et al.
First Author’s Institution: University of North Carolina at Chapel Hill
Status: Published in AJ

Introduction

In the 1970s and 80s, scientists working for the U.S. Department of Defense developed a secret technology for imaging Soviet spy satellites from the ground. One of the consultants, Claire Max, realized the technology could benefit astronomy and pushed for its declassification. The concept of adaptive optics with laser-powered guide stars was finally released to the public in 1991.

Adaptive optics (AO) involves the rapid deformation of a mirror to remove distortions from an image. Imagine seeing a rock at the bottom of a clear but fast-moving stream of water. Inverting the jittery, garbled image into a true image of the rock is a significant engineering and computational challenge. To do it, we first need to observe a simple point source behind the same distorting medium.

This is where laser guide stars come in. One type of laser lights up a thin layer of sodium at about 90 km altitude. This creates a “guide star” that experiences much of the distortion seen in a true star. Using this information, a deformable mirror can correct the true star’s image in real-time.
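To make the correction loop concrete, here is a minimal sketch of the kind of closed-loop control involved: a sensor measures the residual wavefront error against the guide star, and an integrator nudges the deformable mirror to cancel it. The mode count, gain, and toy turbulence model are all illustrative assumptions, not any observatory's actual controller.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes = 20          # number of corrected wavefront modes (illustrative)
gain = 0.4            # integrator loop gain (illustrative)
dm_commands = np.zeros(n_modes)

def atmosphere(t):
    """Toy turbulent phase: slowly drifting modes plus noise (radians)."""
    return np.sin(0.1 * t + np.arange(n_modes)) + 0.1 * rng.standard_normal(n_modes)

for t in range(200):                        # real loops run at roughly 1 kHz
    residual = atmosphere(t) - dm_commands  # what the wavefront sensor reports
    dm_commands += gain * residual          # integrator pushes the DM toward the error

print("residual wavefront RMS (rad):",
      round(float(np.std(atmosphere(200) - dm_commands)), 3))
```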

But large AO systems are extremely unwieldy. They require lots of money and manpower, and lots of observing overhead — it can take several minutes to slew to a target, lock onto a guide star, and initialize a correction loop for deforming the mirror. And yet, in today’s era of “big data” astronomy, AO imaging has become ever more relevant for rapidly-growing datasets. There’s a niche for fast, robotic AO. Recently, the optical scientist Christoph Baranec has filled that niche by building… Robo-AO.

Figure 1: Left: An example image of Kepler’s view of a KOI. The large pixels hide an unknown number of stellar companions. Right: The same view, using Robo-AO. [Ziegler et al. 2017]

Today’s Paper

The Kepler space telescope has detected thousands of possible exoplanets transiting in front of their host stars. But the Kepler pixels are huge — about 4 arcseconds across — and there’s always a risk that multiple stars are lurking in a single pixel (Figure 1). This can lead to false positives (like in eclipsing binary systems) or can dilute real planet signals (where the light from another star will make a transiting planet’s radius seem smaller than it actually is). This calls for AO follow-up.
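As a rough illustration of the dilution problem, the sketch below shows how flux from an unresolved neighbor shrinks the measured transit depth and therefore the inferred planet radius. The fluxes and depth are made-up numbers, not values from the paper.

```python
import math

# Sketch: light from an unresolved neighbor dilutes a transit depth,
# making the planet look smaller than it is. Fluxes are arbitrary examples.
f_target = 1.0        # flux of the planet-hosting star (arbitrary units)
f_neighbor = 0.5      # flux of a star hiding in the same Kepler pixel
true_depth = 0.01     # 1% transit depth if the target were isolated

observed_depth = true_depth * f_target / (f_target + f_neighbor)
# Planet radius scales as the square root of depth, so the correction is:
radius_correction = math.sqrt(1 + f_neighbor / f_target)

print(f"observed depth: {observed_depth:.4%}")
print(f"true planet radius is {radius_correction:.2f}x the naive estimate")
```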

Before Robo-AO, AO follow-up of Kepler objects of interest (KOIs) had been a heterogeneous and incomplete effort using different instruments, reduction pipelines, and analyses. To establish a consistent, complete, and unbiased analysis, Robo-AO has targeted Kepler stars from Palomar Mountain in California and Kitt Peak in Arizona. (Check out an introductory video about the project here, and a time-lapse of Robo-AO hammering targets with a UV laser here.) The most recent paper to be published on the accumulating Robo-AO Kepler data reports on 3857 KOI observations, taken at a rapid-fire cadence of 20 targets per hour.

Figure 2: A beauty parlor gallery of Robo-AO KOI images taken at Kitt Peak. [Ziegler et al. 2018]

Figure 3: Rate of stellar companions as a function of separation. Unassociated stars would be expected to increase roughly like a parabola (dashed line), but the observed number is greater, meaning that some of the stellar companions must be gravitationally bound and may influence the types of exoplanets that emerge in the system. [Ziegler et al. 2018]

Planets in Multistellar Systems?

Models suggest that stellar companions could severely perturb planets’ orbits or fling planets out of a system entirely. To determine the effect of stellar companions on planet formation, we first have to determine which of the KOI companions are gravitationally bound to each other, and how many are unbound but merely lie along nearly the same line of sight.

The authors argue that if the companions were unbound, a field of view twice as large should yield about four times as many found companions. But in fact, the number of found companions increases roughly linearly, suggesting that some of the stars are indeed bound (Figure 3). The authors are preparing another paper that will take a deeper dive into the nature of these companions, which will bring us closer to understanding the effect of multiple stars on planetary systems.
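A toy calculation along the same lines: if unrelated background stars are spread uniformly on the sky, the expected number of chance companions inside a given separation grows with the enclosed area, i.e. with separation squared. The surface density used below is an arbitrary illustrative value, not from the paper.

```python
import numpy as np

density = 0.002                                 # background stars per arcsec^2 (made up)
separations = np.array([0.5, 1.0, 2.0, 4.0])    # search radii in arcseconds

expected_chance = density * np.pi * separations**2   # grows like separation^2
for r, n in zip(separations, expected_chance):
    print(f'within {r:.1f}": expect {n:.4f} unassociated companions per star')
# Doubling the radius quadruples the expected chance alignments; an observed
# count that rises more slowly than this implies some companions are bound.
```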

In the meantime, Robo-AO is continuing to pursue many different science cases. And a southern Robo-AO unit at Cerro Tololo is under development, as is a Robo-AO 2 instrument in Hawaii. Soon we can observe most of the sky with Robo-AO’s automated gaze. Unless, that is, the robots finally rise against us.


Instruments for Astronomy


The Earth's atmosphere consists of layers of air at different temperatures that interact and cause large-scale movements of air masses (referred to as turbulence by scientists). For astronomers, turbulence is detrimental to their work as it disrupts the trajectory of light rays. This causes stars in the sky to twinkle and telescope images to be distorted.

The binary star Zeta Boötis as seen with (at left) and without (at right) the help of adaptive optics.

A simple means to minimize the effects of this inconvenient phenomenon is to build astronomical observatories at high elevations so that the telescopes only peer through the upper levels of the terrestrial atmosphere. The images obtained in this manner will be up to ten times better than those obtained at sea level.

Another partial solution is to make improvements to the observatories themselves. More ventilation traps can be added along the observatory walls so the air around the telescope is the same temperature as the outside air. Laminar air currents can also be blown across the surface of the mirror to minimize turbulence.

Another method, albeit much more radical, is to install telescopes in space, where the atmosphere no longer has any effect on astronomical observations. It is for this reason that today's astronomers are launching space telescopes – like Hubble, FUSE and MOST – into orbit.

The first research into resolving the problem of image distortion related to atmospheric turbulence was conducted in 1902 by the German physicist Karl Strehl, who proposed a method of evaluating the image quality produced by optical systems.

In 1941, the Soviet mathematician Andrei Nikolaevich Kolmogorov made several breakthroughs into the study of turbulence. His work would later be integrated into atmospheric models used to correct distortions affecting astronomical images.

In 1953, the American astronomer Horace Welcome Babcock invented adaptive optics, a process that corrects image distortions caused by the terrestrial atmosphere. The technique consists of taking a sample of the light received by the telescope and calculating its degree of distortion. Deformable mirrors are then used to redirect the light rays and produce a corrected image. The proposal was promising, but Babcock was not able to construct his system for technical reasons.

In 1957, the American physicist Robert B. Leighton of the California Institute of Technology succeeded in reducing atmosphere-induced image distortions using the 1.5-metre telescope at the Mount Wilson Observatory in California. His technique consisted of tilting the telescope's secondary mirror several times per second. In so doing, he managed to obtain the best images ever recorded of Jupiter and Saturn.

In that same year, the Soviet physicist Vladimir Pavlovich Linnik published an article in which he proposed that an artificial "guide star" could be created by pointing a laser at the sky and targeting its focal point in the upper atmosphere to agitate gas molecules.

If an adaptive optics system uses a natural "guide star", it must be bright enough compared to the celestial object of interest to provide adequate light for the telescope's detector. The guide star and the study object must also appear sufficiently close in the sky. It is not always easy to find a star that fulfills these criteria, hence the idea of creating an artificial guide star. Linnik's proposal was revolutionary, but unfortunately remained unknown to the international scientific community until 1992, when his article was finally translated into English.

In 1970, the American engineer W. Thomas Cathey and his collaborators were the first to experimentally demonstrate an adaptive optics system that operated in real time.


Scientists See Better, Fainter with New Keck Laser Guide Star

Washington, D.C. (January 10, 2006) – A new sodium laser is giving 50 times more sky coverage to the atmospheric-correcting technology known as adaptive optics on the Keck II telescope at Mauna Kea, Hawaii. The laser lets scientists explore most of the sky with adaptive optics and gives them the capability to study objects that were previously too faint to be seen with the system. Since 1999, Keck Adaptive Optics has provided 10 times more resolving power than could otherwise be achieved from the ground. The results are producing infrared images from the ground comparable to – and often better than – those taken from space.

“This has been the most exciting technological and scientific breakthrough for the Observatory in the last decade. It may forever change the way we do astronomy from the ground,” said W. M. Keck Observatory Director Fred Chaffee. “We are entering a new, extraordinary era of discovery.”

After just one year of regular scientific use, the Keck Laser Guide Star Adaptive Optics system is producing spectacular results and advancing research in several fields of astronomical study. Findings include the discovery of new asteroids, moons and planetoids in our solar system; the detection of new brown dwarf binary systems, including a strange new kind of binary; observations of physical processes taking place near a supermassive black hole; and new findings about extremely distant supernovae and young galaxies.

The technique of adaptive optics uses visible light from a bright star to measure and correct for atmospheric distortions at infrared wavelengths. But only about two percent of the sky has stars bright enough to use with adaptive optics. The Keck Laser Guide Star system overcomes this limitation by creating an artificial star anywhere in the sky. The W. M. Keck Observatory is the only 8–10 meter class facility in the world currently providing this capability to observers.

“The wish list for astronomers is pretty simple,” said Dr. David Le Mignant, adaptive optics scientist at the W. M. Keck Observatory. “First, they want the highest-quality images that can possibly be obtained. Second, they want to look anywhere they want to in the sky. The laser guide star makes both these wishes come true.”

Operating at nearly 1,000 times a second, the Keck adaptive optics system minimizes the blurring effects of Earth’s atmosphere to provide infrared images 10 times sharper than can otherwise be achieved from the ground. Without any correcting technology, the best telescopes on Earth are limited to an average “seeing” ability, or resolving power, of about 0.5 arcseconds, the equivalent of being able to distinguish an object the size of a blueberry from 2.5 miles (4 km) away. But with adaptive optics, atmospheric blurring is removed, producing a resolving power of about 50 milliarcseconds or better. This improvement is like looking at a penny from 2.5 miles away and being able to read the words “ONE CENT” and “Liberty” stamped on the coin.
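The blueberry and penny comparisons follow from the small-angle approximation (physical size is roughly the angle in radians times the distance), as this quick check shows. The 4 km distance and the object sizes are just the press release's round numbers.

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)
distance_m = 4000.0                        # "2.5 miles" is roughly 4 km

seeing_limited = 0.5 * ARCSEC_TO_RAD       # ~0.5 arcsec natural seeing
ao_corrected = 0.050 * ARCSEC_TO_RAD       # ~50 milliarcseconds with AO

print(f"0.5 arcsec at 4 km spans {seeing_limited * distance_m * 100:.1f} cm (a blueberry)")
print(f"50 mas at 4 km spans {ao_corrected * distance_m * 1000:.1f} mm (penny lettering)")
```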

“We are shattering a limitation for ground-based observations—astronomers can now uncover and study fine structures in extremely faint objects anywhere, within and beyond our galaxy, ” said Dr. Le Mignant. “This new data will particularly complement present deep sky surveys which study the formation of galaxies in the universe.”

More than 21 scientific results made possible with the Keck Laser Guide Star system are presented today at the 207th meeting of the AAS in Washington D.C. Among the many new significant findings discussed at Special Session 98, “Seeing the Universe in a New (Sodium) Light”:

* In the distant regions of our solar system, scientists at Caltech have used the Keck Laser Guide Star to discover three new satellites orbiting some of the largest objects in the Kuiper belt. The surprising properties of these moons suggest that they are formed very differently from the tiny moons known to orbit smaller Kuiper Belt Objects. (A. Bouchez, Caltech)
* At the center of our own Milky Way galaxy, the hostile environment around the supermassive black hole should make it difficult for stars to form, but a group of massive young stars has been detected and their origins are puzzling scientists. The problem has been dubbed “the paradox of youth.” Now, UCLA scientists are able to measure how these young stars move across the sky with an unparalleled precision of only two kilometers per second, and determine, for the first time, the orbit of each of the young stars located more than a few light months from the black hole. Scientists are using the stars’ orbits, which retain an imprint of their origin, to understand how and where these young stars may have formed. (J. Lu, UCLA)

* Also in the Milky Way, scientists at the University of Hawaii are discovering new ultracool brown dwarf binary systems, including a strange new kind of binary never seen before. (M. Liu, UH-IfA)

* Scientists at UCSC and the Supernova Cosmology Project observed a supernova in a galaxy as it appeared when the universe was only 40 percent of its current age (z=1.3). The Keck Laser Guide Star system allowed the team to study the faint system and resolve the supernova from the galaxy core, separated by only 0.4 arcseconds. The discovery was made as part of a major, long-term project called “Center for Adaptive Optics Treasury Survey” or CATS, a project that is looking at deep Hubble galaxy fields with the Keck Laser Guide Star System. (J. Melbourne, Lick/UCSC)

“Major advances in astronomy are often driven by having new technologies to explore the heavens,” said Michael Liu of the Institute for Astronomy at the University of Hawaii. “Through years of effort and dedication of many people, the Keck system is allowing us to see the whole of the universe in a new (sodium) light.”

More than 20 percent of all available observing nights through July 2006 on the Keck II telescope will use the sodium laser. Laser guide star systems do not outperform natural guide stars; rather, they take over in the regions of sky where sufficiently bright natural guide stars do not exist. With bright objects of magnitude 10 or brighter, natural guide star systems still provide slightly better images, and will be used for about 30 percent of the adaptive optics research at W. M. Keck Observatory.

The Future
Regularly using sodium lasers with adaptive optics is in its early stages, but laser guide stars are being developed for most major observatories, most notably the European Southern Observatory’s Very Large Telescope, the Gemini North and Gemini South telescopes and the National Astronomical Observatory of Japan’s Subaru Telescope. Plans are also underway to install a new laser guide star system on the Keck I telescope within the next three years, and also to improve the efficiency and reliability of the existing laser system on Keck II.

Acknowledgements
The W. M. Keck Foundation provided major funding for the construction of the twin 10-meter Keck telescopes and for the adaptive optics and laser guide star systems. Additional funding for the Laser Guide Star Adaptive Optics system was provided by NASA, Lawrence Livermore National Laboratory (LLNL) and the Center for Adaptive Optics. The Laser Guide Star Adaptive Optics system was implemented by a team at W. M. Keck Observatory. The sodium laser was developed at LLNL. The W. M. Keck Observatory is managed by the California Association for Research in Astronomy, a non-profit 501 (c) (3) corporation whose board of directors includes representatives from Caltech, the University of California and NASA.


New laser to help clear the sky of space debris

Credit: Australian National University

Researchers at the Australian National University (ANU) have harnessed a technique that helps telescopes see objects in the night sky more clearly to fight against dangerous and costly space debris.

The researchers' work on adaptive optics—which removes the haziness caused by turbulence in the atmosphere—has been applied to a new 'guide star' laser for better identifying, tracking and safely moving space debris.

Space debris is a major threat to the $US700 billion of space infrastructure delivering vital services around the globe each day. With laser guide star adaptive optics, this infrastructure now has a new line of defense.

The optics that focus and direct the guide star laser have been developed by the ANU researchers with colleagues from Electro Optic Systems (EOS), RMIT University, Japan and the U.S. as part of the Space Environment Research Centre (SERC).

EOS will now commercialize the new guide star laser technology, which could also be incorporated in tool kits to enable high-bandwidth ground to space satellite communications.

The laser beams used for tracking space junk use infrared light and aren't visible. In contrast, the new guide star laser, which is mounted on a telescope, propagates a visible orange beam into the night sky to create an artificial star that can be used to accurately measure light distortion between Earth and space.

This guiding orange light enables adaptive optics to sharpen images of space debris. It can also guide a second, more powerful infrared laser beam through the atmosphere to precisely track space debris or even safely move it out of orbit, avoiding collisions with other debris, so that it eventually burns up in the atmosphere.

Lead researcher, Professor Celine D'Orgeville from ANU, says adaptive optics is like "removing the twinkle from the stars."

"But that's a good thing," Professor D'Orgeville said.

"Without adaptive optics, a telescope sees an object in space like a blob of light. This is because our atmosphere distorts the light traveling between the Earth and those objects.

"But with adaptive optics, these objects become easier to see and their images become a lot sharper. Essentially, adaptive optics cuts through the distortion in our atmosphere, making sure we can clearly see the incredible images our powerful telescopes capture.

"This includes small, human-made objects—like weather and communication satellites, or space junk.

"That's why this development is such an important breakthrough when it comes to our efforts to clear our night skies of the ever-increasing clutter of space debris."

The EOS guide star laser and the ANU adaptive optics systems are located at the ANU Mount Stromlo Observatory in Canberra, Australia.

The ANU researchers will now work with EOS to test the new technology and apply it to a range of other applications including laser communications between the Earth and space.

It's an exciting development that will help to safeguard the wide range of vital applications of space technology in the 21st century.


Laser to zap space debris, funding for SKA

ANU team helps build a laser to move space debris; PM announces funding boost for astronomy.

Australian National University researchers working with defence technology company EOS have developed lasers to blast space debris out of orbit.

Space debris (or “junk”) is becoming a serious problem as orbits get more congested with decommissioned spacecraft and other objects, and new satellites. Debris can smash into assets such as the International Space Station, and even a small object can cause great damage in space.

The ANU’s “guide star laser” will use adaptive optics to better spot, track, and move space debris.

Adaptive optics correct for haziness caused by atmospheric turbulence – the effect that makes stars twinkle. It “untwinkles” them.

Lead researcher, ANU professor Celine D’Orgeville, said “removing the twinkle from the stars” cuts through the atmospheric distortion so objects can be seen more clearly.

“This includes small, human-made objects – like weather and communications satellites, or space junk,” she said.

EOS group chief executive officer Ben Greene said EOS maintains a database of space objects and will now be able to actively manipulate them.

“Space debris is a major society threat, globally but especially in Australia due to our heavy economic dependence on space assets,” he said.

In delightful news for deep space exploration, the Federal Government has chipped in $387 million for supercomputing capabilities that will help astronomers study the beginning of the universe.

On Wednesday, Prime Minister Scott Morrison committed $300 million over a decade to the Square Kilometre Array Observatory in Western Australia, which will be the world’s largest radio telescope.

According to CSIRO, in its first phase SKA “will process data at a rate of about 157 Terabytes per second, which is enough to fill 27 million laptops with data every day and is about 5 times the estimated global internet traffic in 2015”.
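A quick back-of-envelope check of those figures, assuming roughly half a terabyte of storage per laptop (the per-laptop figure is an assumption, not CSIRO's):

```python
rate_tb_per_s = 157          # CSIRO's quoted first-phase data rate
seconds_per_day = 86_400
tb_per_laptop = 0.5          # assumed laptop storage, about 500 GB

tb_per_day = rate_tb_per_s * seconds_per_day
laptops_per_day = tb_per_day / tb_per_laptop
print(f"{tb_per_day:,.0f} TB per day, about {laptops_per_day / 1e6:.0f} million laptops")
```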

The rest of the money will go to the Pawsey Supercomputing Centre, which is set to host the world’s first diamond quantum accelerator, and to fibre-optic connections and site readiness.

Australia will build and host the low-frequency part of the SKA as part of its collaboration with 15 other countries. Eventually up to a million antennae will help scientists model the first billion years of the universe, including the formation of the first stars and galaxies.

The Pawsey Supercomputing Centre will process that data, and is also available to organisations across Australia.

PM Morrison said the technology would help scientists “crack some of the biggest problems that are there”.

“The quantum computing capability is not just essential for solving deep scientific problems, but it’s absolutely essential for national security,” he said.

Tory Shepherd

Tory Shepherd is an Adelaide-based freelance journalist who has covered Space 2.0 for The Advertiser.


A Brief History of Laser Adaptive Optics

In this writeup I focus on two people’s contributions to the development of adaptive optics. I’ve left out many important names so that I could briefly highlight Dr. Max’s contributions in the context of the Weber Prize. An international cast of scientists and engineers have made modern AO what it is today.

JASON (no relation) is a small, elite, civilian group of scientists that advises the government on science and technology, mostly through classified studies for the DoD, DoE, and the various intelligence communities. Its members have developed many important concepts for these agencies, many of them now declassified.

One of the members, Claire Max, is known to many astronomers as the director of the University of California Observatories. There’s an amazing story about how astronomy changed forever that starts back in the 70’s that involves JASON, Dr. Max, Freeman Dyson, Ronald Reagan… well, let’s start at the beginning.

In the early 1960s, the undisputed polymath and genius Freeman Dyson (yes, that Freeman Dyson) was a member of JASON, and worked on many projects for them. According to his commentary here, in 1972 Harold Lewis suggested that Mr. Dyson develop a theory for a control system that could help a telescope correct for the blurring effects of the Earth’s atmosphere. This would allow satellites to take clearer pictures of the ground, and telescopes to take clearer pictures of satellites. Mr. Dyson found the problem interesting, and, once the project was declassified, wrote a prescient 1975 paper on the subject, exploring the limits of the sharpness of images while looking through the atmosphere.

Mr. Dyson tried to persuade astronomers, first in the US, later in Europe and Russia, to pursue the technology, but he couldn’t arouse their interest (perhaps the problem was still too hard from a technical perspective? Or perhaps the prospect of exquisite images of only the very brightest stars was not sufficiently enticing). Mr. Dyson speculated that (true) rumors that the US military was developing the technology made “re-inventing the wheel” seem like a waste of time. Then Ronald Reagan announced the Strategic Defense Initiative (“Star Wars”), classifying much of the research on the area and confirming those rumors. Dyson writes that “this action set back progress in adaptive optics by ten years.”

Prof. Claire Max of UC Santa Cruz

Meanwhile, Claire Max, another JASON, was working on the problem of adaptive optics, and lamenting that it could only be used on bright targets with lots of photons. Typical astronomical sources were very faint, and so the limits Mr. Dyson had calculated for them were quite poor, meaning that astronomers could not necessarily use the technology. Working (as I have heard the story) with other JASON members, Dr. Max developed a technology to shoot an orange laser into the upper atmosphere that would excite the layer of sodium atoms there. These glowing atoms would shine back down, giving astronomers the photons they needed to calculate the distortions caused by the atmosphere. With this “laser guide star adaptive optics” technology, the entire sky could be made as clear and sharp as quantum physics allows, despite the atmosphere.

Yes, it really looks like that.

In 1991, Dr. Max finally succeeded in getting the technology declassified, and then started work at Lawrence Livermore National Laboratory outfitting the Shane 120-in telescope at Lick Observatory with a laser adaptive optics system, a program that succeeded in 1996. When I was a grad student at Berkeley, I often saw the Shane shooting its orange laser into the sky all night as I worked on the CAT.

Andrea Ghez, explorer of the black hole at the center of the Milky Way, user of LGS AO on Keck

Dr. Max would go on to edit the Keck “blue book” design study, an outline for how to build the most successful AO system possible with existing technology. This template was used for the successful Keck laser AO system that Dr. Max was instrumental in bringing about (and which Andrea Ghez would use to weigh the supermassive black hole at the center of the Milky Way, essentially proving that black holes exist). It’s also been followed for the Gemini Planet Imager, now imaging planets orbiting other stars, and other next-generation AO systems.

In 1999, UC Santa Cruz became host to the National Science Foundation Center for Adaptive Optics, which developed the technology further and trained a new generation of astronomers in the theory, design, and use of AO. Dr. Max helped propose the center and served as its director. The CfAO kept the US competitive with Europe in AO design, and generated many of the cutting edge technologies used today in AO.

The whole $1 billion Thirty Meter Telescope concept relies on being able to construct giant space lasers. Seriously.

Today, laser guide star adaptive optics is an essential technology for every large telescope in the world. It is the basis for the case for the next generation of extremely large telescopes like the Giant Magellan Telescope, the European Extremely Large Telescope, and the Thirty Meter Telescope. It allows astronomers to obtain space-quality images of any part of the sky, using the principles laid down by Freeman Dyson in the mid ’70’s.

So it is quite overdue, but welcome nonetheless, that the AAS has awarded Claire Max the 2015 Joseph Weber award for Instrumentation. The citation is “for co-inventing sodium laser guide star adaptive optics and for shepherding adaptive optics from its roots in classified space surveillance to its prominence today as an essential technology on large telescopes. Her leadership has truly advanced the field of adaptive optics and transformed how we observe by making near diffraction-limited imaging possible on large telescopes, thus opening new fields of discovery including resolving stars and gas near supermassive black holes and studying extrasolar planets.”

I’m not an AO guy, I was just around at Berkeley/Lick when a lot of the laser AO stuff was happening. Bruce Macintosh, Marshall Perrin, and Norbert Hubin all contributed to my understanding of the history of LGS AO, but any errors in the above are my own. If you know more about this history or see a mistake, please leave a comment!


Tiny satellites could be 'guide stars' for huge next-generation telescopes

In the coming decades, massive segmented space telescopes may be launched to peer even closer in on far-out exoplanets and their atmospheres. To keep these mega-scopes stable, MIT researchers say that small satellites can follow along, and act as “guide stars,” by pointing a laser back at a telescope to calibrate the system, to produce better, more accurate images of distant worlds. Credit: Christine Daniloff, MIT

There are more than 3,900 confirmed planets beyond our solar system. Most of them have been detected because of their "transits"—instances when a planet crosses its star, momentarily blocking its light. These dips in starlight can tell astronomers a bit about a planet's size and its distance from its star.

But knowing more about the planet, including whether it harbors oxygen, water, and other signs of life, requires far more powerful tools. Ideally, these would be much bigger telescopes in space, with light-gathering mirrors as wide as those of the largest ground observatories. NASA engineers are now developing designs for such next-generation space telescopes, including "segmented" telescopes with multiple small mirrors that could be assembled or unfurled to form one very large telescope once launched into space.

NASA's upcoming James Webb Space Telescope is an example of a segmented primary mirror, with a diameter of 6.5 meters and 18 hexagonal segments. Next-generation space telescopes are expected to be as large as 15 meters, with over 100 mirror segments.

One challenge for segmented space telescopes is how to keep the mirror segments stable and pointing collectively toward an exoplanetary system. Such telescopes would be equipped with coronagraphs—instruments that are sensitive enough to discern between the light given off by a star and the considerably weaker light emitted by an orbiting planet. But the slightest shift in any of the telescope's parts could throw off a coronagraph's measurements and disrupt measurements of oxygen, water, or other planetary features.

Now MIT engineers propose that a second, shoebox-sized spacecraft equipped with a simple laser could fly at a distance from the large space telescope and act as a "guide star," providing a steady, bright light near the target system that the telescope could use as a reference point in space to keep itself stable.

In a paper published today in the Astronomical Journal, the researchers show that the design of such a laser guide star would be feasible with today's existing technology. The researchers say that using the laser light from the second spacecraft to stabilize the system relaxes the demand for precision in a large segmented telescope, saving time and money, and allowing for more flexible telescope designs.

"This paper suggests that in the future, we might be able to build a telescope that's a little floppier, a little less intrinsically stable, but could use a bright source as a reference to maintain its stability," says Ewan Douglas, a postdoc in MIT's Department of Aeronautics and Astronautics and a lead author on the paper.

The paper also includes Kerri Cahoy, associate professor of aeronautics and astronautics at MIT, along with graduate students James Clark and Weston Marlow at MIT, and Jared Males, Olivier Guyon, and Jennifer Lumbres from the University of Arizona.

For over a century, astronomers have been using actual stars as "guides" to stabilize ground-based telescopes.

"If imperfections in the telescope motor or gears were causing your telescope to track slightly faster or slower, you could watch your guide star on a crosshairs by eye, and slowly keep it centered while you took a long exposure," Douglas says.

In the 1990s, scientists started using lasers on the ground as artificial guide stars by exciting sodium in the upper atmosphere, pointing the lasers into the sky to create a point of light some 40 miles from the ground. Astronomers could then stabilize a telescope using this light source, which could be generated anywhere the astronomer wanted to point the telescope.

"Now we're extending that idea, but rather than pointing a laser from the ground into space, we're shining it from space, onto a telescope in space," Douglas says. Ground telescopes need guide stars to counter atmospheric effects, but space telescopes for exoplanet imaging have to counter minute changes in the system temperature and any disturbances due to motion.

The space-based laser guide star idea arose out of a project that was funded by NASA. The agency has been considering designs for large, segmented telescopes in space and tasked the researchers with finding ways of bringing down the cost of the massive observatories.

"The reason this is pertinent now is that NASA has to decide in the next couple years whether these large space telescopes will be our priority in the next few decades," Douglas says. "That decision-making is happening now, just like the decision-making for the Hubble Space Telescope happened in the 1960s, but it didn't launch until the 1990s.'"

Cahoy's lab has been developing laser communications for use in CubeSats, which are shoebox-sized satellites that can be built and launched into space at a fraction of the cost of conventional spacecraft.

For this new study, the researchers looked at whether a laser, integrated into a CubeSat or slightly larger SmallSat, could be used to maintain the stability of a large, segmented space telescope modeled after NASA's LUVOIR (for Large UV Optical Infrared Surveyor), a conceptual design that includes multiple mirrors that would be assembled in space.

Researchers have estimated that such a telescope would have to remain perfectly still, within 10 picometers—about a tenth the diameter of a hydrogen atom—in order for an onboard coronagraph to take accurate measurements of a planet's light, apart from its star.

"Any disturbance on the spacecraft, like a slight change in the angle of the sun, or a piece of electronics turning on and off and changing the amount of heat dissipated across the spacecraft, will cause slight expansion or contraction of the structure," Douglas says. "If you get disturbances bigger than around 10 picometers, you start seeing a change in the pattern of starlight inside the telescope, and the changes mean that you can't perfectly subtract the starlight to see the planet's reflected light."

The team came up with a general design for a laser guide star that would be far enough away from a telescope to be seen as a fixed star—about tens of thousands of miles away—and that would point back and send its light toward the telescope's mirrors, each of which would reflect the laser light toward an onboard camera. That camera would measure the phase of this reflected light over time. Any change of 10 picometers or more would signal a compromise to the telescope's stability that onboard actuators could then quickly correct.
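For a sense of scale, the sketch below converts a 10-picometer path change into an optical phase shift. The laser wavelength is an assumed placeholder, since the article does not state one.

```python
import math

wavelength_m = 1.0e-6        # assumed ~1 micron laser light (placeholder)
path_change_m = 10e-12       # the 10-picometer stability requirement

phase_shift_rad = 2 * math.pi * path_change_m / wavelength_m
print(f"10 pm of path change is {phase_shift_rad:.1e} rad of phase,")
print(f"about 1/{wavelength_m / path_change_m:,.0f} of a wavelength")
```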

To see if such a laser guide star design would be feasible with today's laser technology, Douglas and Cahoy worked with colleagues at the University of Arizona to model different brightness sources and figure out, for instance, how bright a laser would have to be to provide a certain amount of information about a telescope's position, or to provide stability, using models of segment stability from large space telescopes. They then drew up a set of existing laser transmitters and calculated how stable, strong, and far away each laser would have to be from the telescope to act as a reliable guide star.
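The flavor of that link-budget calculation can be sketched with simple radiometry: spread the laser power over its footprint at the telescope, collect it with the primary mirror, and convert to a photon rate. Every number below (power, divergence, distance, aperture) is an assumption for illustration, not a value from the paper.

```python
import math

h, c = 6.626e-34, 3.0e8        # Planck's constant (J*s), speed of light (m/s)
wavelength = 1.0e-6            # m, assumed laser wavelength
power = 0.5                    # W, assumed transmitter power
divergence = 1e-4              # rad, assumed full-angle beam divergence
distance = 7e7                 # m, roughly tens of thousands of miles
telescope_diameter = 15.0      # m, a LUVOIR-like aperture

beam_radius = 0.5 * divergence * distance               # footprint radius at the telescope
flux = power / (math.pi * beam_radius**2)               # W per m^2 arriving
collected = flux * math.pi * (telescope_diameter / 2)**2
photons_per_s = collected / (h * c / wavelength)

print(f"beam footprint radius: {beam_radius / 1e3:.1f} km")
print(f"collected power: {collected:.2e} W, about {photons_per_s:.2e} photons/s")
```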

In general, they found laser guide star designs are feasible with existing technologies, and that the system could fit entirely within a SmallSat about the size of a cubic foot. Douglas says that a single guide star could conceivably follow a telescope's "gaze," traveling from one star to the next as the telescope switches its observation targets. However, this would require the smaller spacecraft to journey hundreds of thousands of miles paired with the telescope at a distance, as the telescope repositions itself to look at different stars.

Instead, Douglas says a small fleet of guide stars could be deployed, affordably, and spaced across the sky, to help stabilize a telescope as it surveys multiple exoplanetary systems. Cahoy points out that the recent success of NASA's MarCO CubeSats, which supported the Mars InSight lander as a communications relay, demonstrates that CubeSats with propulsion systems can work in interplanetary space, for longer durations and at large distances.

"Now we're analyzing existing propulsion systems and figuring out the optimal way to do this, and how many spacecraft we'd want leapfrogging each other in space," Douglas says. "Ultimately, we think this is a way to bring down the cost of these large, segmented space telescopes."


Wake Forest Professor's Math Gives Telescopes, New Laser Weapons "Hubble-Like" Vision

The Hubble Space Telescope isn't the only stargazer getting better eyes to view the universe.

A Wake Forest University professor's applied mathematics is part of new adaptive optics technology producing "Hubble-like" improvements in the sight of ground-based telescopes and new laser weapons.

Adaptive optics combines powerful lasers, high-speed computers, active mirrors that can rapidly alter their shape, and Robert J. Plemmons' problem-solving mathematical algorithms to reconstruct images distorted by Earth's atmosphere. By analyzing light returning from bright stars such as Vega or artificial stars created by shining a laser into the night sky, scientists can diminish the distorting effects of Earth's atmosphere.

The result: telescopes able to see 50 to 100 times more detail and laser-guided weapons better able to zap enemy missiles.

"Atmospheric effects are continuously changing so when you deblur an image, you have to do billions and billions of computations fast," said Plemmons, Z. Smith Reynolds Professor of Mathematics and Computer Science at Wake Forest. "When we look at a distant galaxy, the light from it travels, say, several million years to reach Earth but only gets blurred in the last few microseconds. That's the basic problem of atmospheric imaging."

No fewer than 10 telescopes are adding adaptive optics systems to improve their view, including what is now the highest-resolution telescope on Earth: the Air Force Phillips Laboratory's 3.5-meter, $27 million instrument at the Starfire Optical Range in New Mexico. Equipped with adaptive optics in January under a project supported by the Air Force Office of Scientific Research (AFOSR) and the National Science Foundation, the telescope can track softball-sized objects traveling 1,000 miles above the surface.

Plemmons' algorithms, developed in more than 25 years of research for the Defense Department, are also being used to overcome wind, hot air and other atmospheric turbulence that could affect the aim of the Air Force's $1.1-billion Airborne Laser Weapons System (ABL), designed to fire a laser through the nose of an aircraft to zap enemy missiles.

Astronomer Horace Babcock first proposed the idea of adaptive optics in 1953, but the first experiments did not begin until the 1970s. Only in the 1980s, with the Strategic Defense Initiative (SDI), or "Star Wars," did Plemmons and other adaptive optics researchers gain substantial funding. Ironically, the declassification of SDI work in 1991 has revolutionized ground-based astronomy.

"Whether you are trying to shine a laser on a target or get a sharp image of something in orbit, you have the same problems," said Maj. Scott Shreck, manager of the AFOSR's computational mathematics program.

Better eyes for the heavens also help the Air Force keep better tabs on spy satellites or protect space shuttle crews and satellites from orbiting space junk. "Some of this space junk will cause trouble when it comes down," Plemmons said. "Some U.S. and old Soviet satellites have nuclear power systems, so we want to know where they are."

Twinkling stars and other annoying effects of the Earth's atmosphere on light have confounded stargazers since the invention of the telescope. It was Christiaan Huygens, the inventor of the pendulum clock, who first noticed in the 17th century that heavenly bodies quivered in telescopic view through no fault of the telescope. Sir Isaac Newton observed in 1704: "The only remedy is a most serene and quiet air."

Iraqi Scuds and other missile threats have now made Newton's "remedy" a national defense priority. "We don't yet have a good ballistic missile defense system against Scud-type threats," said Plemmons, who testified before Congress last spring on the need for more basic science research to avoid the kind of mathematical errors that allowed an Iraqi Scud to strike a U.S. Army barracks in Dhahran, Saudi Arabia, on Feb. 25, 1991, killing 28 Americans.

"It¹s not enough to just hit a target," Plemmons said. "The idea behind the ABL program is to image the nose cone of an incoming missile and fire the laser from the aircraft at the fuel supply behind the nose cone -- where it¹s most vulnerable -- and before the missile reaches its zenith, when it¹s still over enemy territory."

Author of more than 150 papers and five books on computational mathematics, Plemmons envisions the day when the math of adaptive optics will allow ground-based telescopes to possess the same imaging accuracy as the Hubble.

For now, he said both the Hubble and ground-based telescopes have roles in the exploration of the universe's mysteries. The larger mirrors of ground-based telescopes allow them to see the bigger picture of celestial bodies, whole planets and stars. The pristine vacuum of space allows the Hubble to better inspect individual areas and gather ultraviolet, X-ray and other light Earth's atmosphere blocks out.

"One doesn¹t exclude the other," Plemmons said. "We need both."



From Cosmology to Biology

In 2014, Eric Betzig won a Nobel Prize for breaking the law. Specifically, Abbe's law, which set a limit to how finely optical microscopes could resolve their subjects. Betzig's workaround used fluorescent molecules to light up living cells from the inside. But breaking one closely held assumption wasn't enough for Betzig. In his acceptance lecture at Stockholm University, Betzig told the audience that optical microscopy must free itself from petri dishes and cover slips. "That's not where cells evolved; we need to look at them inside the whole organism," he said. That is, by peering into dense, living tissue. This was no idle call to action. Using technology borrowed from astronomy, his lab had already witnessed neurons firing in the brain of a live mouse.

That technology is called adaptive optics, and it's one of the marquee stories in the annals of tech transfer. First theorized in the 1950s, adaptive optics is, at its most basic, a two-part system: a sensor reads distortion on incoming light, and a deformable mirror changes shape to match that distortion, unscrambling the photons while reflecting them to the objective viewer. Simple as that sounds, even the most rudimentary models took decades to develop. Many of the essential technological innovations took place in classified military programs—notably, the oft-maligned "Star Wars" defense initiative. When declassified in the early 1990s, that work reinvigorated civilian astronomy.

In his book The Adaptive Optics Revolution, historian Robert Duffner calls adaptive optics "the most revolutionary technological breakthrough in astronomy since Galileo pointed his telescope skyward to explore the heavens 400 years ago." And adaptive optics is still revolutionizing science, at both macro- and microscopic scale. Biologists use adaptive optics to see cellular interplay in live tissue; vision scientists use it to map individual eyeball aberrations; lithographers can now etch transistor circuitry inside deeply refractive crystal; and, as recently as January of this year, astronomers used it to see what they believe is the birth of either a neutron star or black hole.

It all started with the twinkle of the stars. After being flung from burning globs of gas, stellar photons cross the universe unmolested. That is, until they reach Earth's atmosphere. The mix of warm and cool air up there creates a turbulent mess for light, which bends as it passes through pockets of air of differing density.

In 1953, astronomer Horace Babcock came up with an idea to untwinkle the stars. In the journal Publications of the Astronomical Society of the Pacific, he described a "seeing compensator" that compared distorted stellar light against light generated by a bright "control star." His original concept called for bouncing the incoming light off an Eidophor—a mirror covered in a thin layer of electrified oil. Inner gadgetry would produce a schlieren image, showing distortion in the incoming light waves. This schlieren image would then pass through a type of television tube called an orthicon, which would transmit the perceived distortion levels back to the Eidophor mirror. The electric impulses controlling the surface tension of the oily mirror would distort the Eidophor so it matched the incoming distortion. And this would be a closed-loop feedback, constantly reading and morphing to match the distorted photons. Given how quickly atmospheric turbulence shifts, this last bit was essential. If it all worked perfectly, the astronomer would get a clear view of their desired celestial object.

Alas, contemporary technology couldn't meet Babcock's specs. It was another decade before anyone began designing rudimentary adaptive optics systems. But military and aerospace researchers, rather than astronomers, picked up the thread. Sputnik—launched in 1957—sparked a reconnaissance race between the US and USSR. As each nation covertly tried to keep tabs on its rival's armaments, both sides launched hundreds of satellites. "This was a time when the US was very interested to know what the Soviets were getting up to in space," says Robert Fugate, a retired senior scientist with the Air Force Research Laboratory in Kirtland, New Mexico, who spearheaded many of the military's later classified efforts with adaptive optics. By the mid-60s, some US defense thinkers thought Babcock's ideas might help them get a better look at those Soviet satellites. The technology might also be able to see other threats, like incoming missiles. Some even thought these techniques might someday translate into laser energy weapons capable of shooting those enemy missiles down.

The first military research took root in upstate New York, at the Rome Air Development Center. Funded by the Advanced Research Projects Agency—the Defense Department research agency that later became DARPA—these Air Force researchers teamed up with civilian corporation Itek to tackle the basic systems behind adaptive optics: the wavefront sensor, deformable mirror, and processor capable of relaying the constant signals between the two. The groups also gathered data on light passing through the atmosphere. Some of those experiments involved flying a B-57 with a 1,200 watt tungsten light bulb over the base. Unsurprisingly, civilians living in the area concocted all sorts of stories to explain those lights, even though the research group was forthcoming about the nature of their work. By the early 70s, Rome and Itek had a prototype system capable of measuring atmospheric distortion: the real-time atmospheric compensation (RTAC) system.

Building and testing the first AO system

Impressed with Itek's role, in 1973 DARPA awarded the company a contract to further the RTAC concept. Once again, the company partnered with Rome's Air Force scientists. Between 1976 and 1981, they developed the compensated imaging system and installed it on the 1.6 meter telescope atop Haleakala volcano in Hawaii. In March of 1982, they tested it for the first time. In The Adaptive Optics Revolution, Robert Duffner describes this maiden run:

"Scientists aimed the telescope at a star. The first image danced around and looked washed out and blurry. But when Don Hanson pushed the button to activate the adaptive optics on the telescope . a dramatic change occurred: the image became much brighter, clearer, and more detailed."

Exciting as it was, the experiments were classified, and the astronomers at that Maui telescope were sworn to secrecy and could not tell their colleagues about the breakthrough. On top of that, the DOD didn't seem particularly impressed with the results, at least not enough to bring adaptive optics into production. See, adaptive optics systems need a lot of light to work. Some of it goes to the wavefront sensor, so the device can measure the distortion. Many stars, and most satellites, aren't bright enough on their own to provide the light needed for adaptive optics with enough left over for the telescope to see the image. Astronomers would work around this by imaging a second nearby star as a "guide." Satellites were trickier. The best method was imaging the satellite just after sunset, but while near-Earth orbit was still illuminated by the sun's light. The adaptive optics system would use the reflected sunlight to measure the atmospheric distortion.

"However, making reflected sunlight from the satellite itself the guide star for the system just is not practical," says Fugate. For one, this limited the time a telescope with adaptive optics had to collect satellite imagery to just a few minutes each day. Also, sometimes the reflected sunlight wasn't bright enough to correct the imaging. These drawbacks made the system too unreliable for counter-espionage purposes. But the DOD did not give up on adaptive optics. Not even close.

Using lasers as guide stars

In 2011, the European Southern Observatory tested the Wendelstein laser guide star while a thunderstorm approached.

"In the late spring of 1982, we were called to go brief the "Jasons" (a group of hard-nosed scientists who evaluated proposals for the DOD, cofounded by none other than Charles Townes) on using a laser to generate a guide star in the sky to make measurements of atmospheric distortion to be used in an adaptive optics system," says Fugate. The Jasons approved this proposal.

The 1980s were heady times for Fugate. He returned to work at Starfire Optical Range in New Mexico. While refining laser guide star concepts, he continued to develop core adaptive optics systems and even managed to finagle a combined $1 million from the Air Force and Strategic Defense Initiative to get the Starfire Optical Range its own 1.5 meter telescope. And then, on 13 February 1989, he completed the first successful test of adaptive optics combined with a laser guide star, correcting atmospheric distortions in real time.

This was another amazing breakthrough for adaptive optics. The laser guide star allowed corrections that provided unprecedented views of the stars—and satellites. And, it was a breakthrough that mainstream astronomy had no idea about, thanks to national security.

However, the veil would soon lift. The Strategic Defense Initiative—which had funded a lot of Fugate's work—was winding down. He and others set to work lobbying the military to declassify its work on adaptive optics and laser guide stars. Astronomers around the world were also working on adaptive optics and laser guide stars, but were wasting brain power trying to solve problems the US military had figured out years before.

By 1990, the Air Force decided declassifying wouldn't give America's enemies any real strategic advantage. It planned several avenues for releasing this information, the most important (and dramatic) of which was Fugate's presentation at the meeting of the American Astronomical Society in 1991 in Seattle. His presentation featured a slide of side-by-side pictures: On the left, a blurry fuzz. On the right, two tightly focused shining balls of light. The pictures showed the same star system, before and after adaptive optics with laser guide stars.

Fugate's presentation was a smash, and afterward he and his colleagues worked hard to share their work with other astronomers. His influence percolated into other disciplines as well.

Adaptive optics for vision science

Eyes are tempestuous little organs, and in the early 1990s ophthalmologist David Williams heard some colleagues from the University of Rochester's optics labs discussing a new technology that might help him peer through ocular distortion. "I cold-called Fugate out of the blue and told him what I wanted to do," says Williams. Fugate told Williams to come on down to Albuquerque—but told him to show up around midnight. "Astronomers are nocturnal creatures," says Williams. After the visit, Williams returned to Rochester, bought a deformable mirror, and hired a grad student who knew how to build wavefront sensors. They spent the next few years building adaptive optics systems for eye science.

During the 90s, civilian astronomers were pushing adaptive optics further in their own field. One of the most proactive was SPIE Fellow Claire Max, an astrophysicist and member of the Jasons who, at the time, was based at Lawrence Livermore National Laboratory. She had spearheaded efforts to build the first astronomical laser guide star at Lick Observatory, and wanted to continue innovating systems for larger aperture telescopes. However, she was having trouble finding funding to match the scale of her ambitions. That is, until she and her colleague Jerry Nelson at UC Santa Cruz attended a workshop given by the head of an NSF Science and Technology Center, where they heard about 10-year NSF grants worth a few million dollars each year. "This gives you time and money to do ambitious projects," says Max. "We knew these NSF centers liked doing something that involved more than one discipline," she says. So, Max and Nelson contacted Williams, and included his vision work in their proposal.

They won the grant in 1999, and established the Center for Adaptive Optics at UC Santa Cruz, where for a decade astronomers and vision scientists spent that grant money collaboratively advancing adaptive optics in their respective fields, while also operating as an educational hub for the technology. After the government money ran dry, they secured extra funding, and the center continues its mission today. The astronomers successfully built bigger and better adaptive optics systems, such that even the largest-aperture telescopes could benefit from the technology. Meanwhile, the vision research led to significant breakthroughs in anatomy and clinical care. They were able to get 3D views of the retina in high resolution, which helped them understand more about how eyes work, and also became an important diagnostic tool.

Adaptive optics played a role in surgery, too. Williams helped Bausch + Lomb develop technology to map a person's cornea for laser vision correction. Williams is currently using adaptive optics to develop a cure for blindness. "We can look into the eye and see on a cellular spatial scale whether our treatments are making a difference or not," says Williams.

From the edge of space to the limits of biology

Credit: Reproduced with permission from Liu et al., Science 360:6386 (2018).

Adaptive optics got its start helping astronomers see deep into space. Over the past two decades, it's also allowed microscopists to peer into the nuances of cellular biology. "My PhD research project 20 years ago involved some of the first work in the application of adaptive optics to microscopes," says Martin Booth, who at the time was working under Tony Wilson and Mark Neil at Oxford University. "The major part of this work was the first demonstration of adaptive aberration correction in confocal laser scanning microscopes, which are commonly used in biomedical imaging."

Since then, SPIE Senior Member Booth has established his own Oxford lab, where he continues to focus on developing the adaptive optics systems themselves. This has resulted in a blossoming array of applications and discoveries. For instance, traditional wavefront sensors don't always work at microscopic scale, so he has developed an image-based aberration measurement scheme. And, he believes this discovery could cross back over into telescopes to help maintain the active alignment of the mirrors used in the cameras of Earth-facing satellites.

Eric Betzig was introduced to adaptive optics in 2006 when he was a new hire at Janelia Research Campus. Though he'd been working on fluorescent schemes to light up single cells for superresolution (the same work that earned him a Nobel Prize in 2014), most of his new colleagues were focused on the brain. So, he got on board and hired his first postdoctoral researcher. Na Ji was part neuroscientist, part physicist, and was already using adaptive optics by the time she came to Betzig's lab.

However, as Booth had also learned, adaptive optics doesn't translate directly from astronomy to microscopy. "The challenge in astronomy is the rapid fluctuations," says Ji. "You have to make a feedback loop between the wavefront sensor and the deformable mirror thousands of times a second." In biology, the distortion doesn't fluctuate, it's just very dense. "I don't know if you've ever seen a brain, but they look like a blob of tofu," she says. Ji and Betzig came up with several highly technical alternative methods for peering through tissue. One involves a homebrewed wavefront sensor that works like an astronomical wavefront sensor in reverse. They also used fluorescent molecules inside the brain like internal guidestars and near-infrared light to penetrate into the flesh.

Betzig says he's close to retiring from microscopy. His final project is building a microscope he calls an "adaptive optics Swiss Army knife." This machine will pair every type of modern optical microscopy—confocal, two-photon, structured illumination, superresolution localization, expansion, lattice light sheet—with an optimized adaptive optics system. "It's still the early days of adaptive optics microscopy, and most people aren't aware of what it can do," he says. He predicts that within the next 10 to 20 years every commercial microscope will come standard with adaptive optics. "Just like with telescopes, it will make no sense to use one without it."

-Nick Stockton is a freelance writer based in Pittsburgh, Pennsylvania. He contributes to WIRED and Popular Science.