Would They Tell Us If An Unstoppable Threat Was Approaching


This recent piece rekindled my thoughts on this question, though I've pondered it for a while now:

With the new, more powerful scopes coming online in the coming years, would we be told if they discovered an approaching, world-ending threat? This hypothetical threat could be years, decades or even longer away.

What first made me even consider this was the situation with space shuttle Columbia where, I believe, the crew weren't told about the damaged tiles as there was nothing they could do about it anyway.

So, if the new scopes found a huge asteroid, or maybe even a star, that was on course to do fatal damage to us in 10 years or something, and there was currently no way at all to stop it, would they tell us?

Would the fear of mass panic stop them from telling us?

Thank you.

Scientists Discover Why Flies Are So Hard To Swat

Over the past two decades, Michael Dickinson has been interviewed by reporters hundreds of times about his research on the biomechanics of insect flight. One question from the press has always dogged him: Why are flies so hard to swat?

"Now I can finally answer," says Dickinson, the Esther M. and Abe M. Zarem Professor of Bioengineering at the California Institute of Technology (Caltech).

Using high-resolution, high-speed digital imaging of fruit flies (Drosophila melanogaster) faced with a looming swatter, Dickinson and graduate student Gwyneth Card have determined the secret to a fly's evasive maneuvering. Long before the fly leaps, its tiny brain calculates the location of the impending threat, comes up with an escape plan, and places its legs in an optimal position to hop out of the way in the opposite direction. All of this action takes place within about 100 milliseconds after the fly first spots the swatter.

"This illustrates how rapidly the fly's brain can process sensory information into an appropriate motor response," Dickinson says.

For example, the videos showed that if the descending swatter—actually, a 14-centimeter-diameter black disk, dropping at a 50-degree angle toward a fly standing at the center of a small platform—comes from in front of the fly, the fly moves its middle legs forward and leans back, then raises and extends its legs to push off backward. When the threat comes from the back, however, the fly (which has a nearly 360-degree field of view and can see behind itself) moves its middle legs a tiny bit backwards. With a threat from the side, the fly keeps its middle legs stationary, but leans its whole body in the opposite direction before it jumps.

"We also found that when the fly makes planning movements prior to take-off, it takes into account its body position at the time it first sees the threat," Dickinson says. "When it first notices an approaching threat, a fly's body might be in any sort of posture depending on what it was doing at the time, like grooming, feeding, walking, or courting. Our experiments showed that the fly somehow 'knows' whether it needs to make large or small postural changes to reach the correct preflight posture. This means that the fly must integrate visual information from its eyes, which tell it where the threat is approaching from, with mechanosensory information from its legs, which tells it how to move to reach the proper preflight pose."

The results offer new insight into the fly nervous system, and suggest that within the fly brain there is a map in which the position of the looming threat "is transformed into an appropriate pattern of leg and body motion prior to take off," Dickinson says. "This is a rather sophisticated sensory-to-motor transformation and the search is on to find the place in the brain where this happens," he says.

Dickinson's research also suggests an optimal method for actually swatting a fly. "It is best not to swat at the fly's starting position, but rather to aim a bit forward of that to anticipate where the fly is going to jump when it first sees your swatter," he says.

The paper, "Visually Mediated Motor Planning in the Escape Response of Drosophila," will be published August 28 in the journal Current Biology.

The research was funded by the National Institutes of Health and the National Science Foundation.

Story Source:

Materials provided by California Institute of Technology. Note: Content may be edited for style and length.

You've Heard of Shooting Stars, but This is Ridiculous

The idea that stars live in galaxies has been astronomy’s conventional wisdom since the 1920s. It took a serious hit recently, though, when observers concluded that as many as half the stars in the universe might actually hover outside galaxies, flung off into the intergalactic void as collateral casualties when smaller galaxies merge to become large ones.

But while that discovery was startling, a new prediction posted online takes the finding to a whole new level. A significant number of stars, say Avi Loeb and James Guillochon, of the Harvard-Smithsonian Center for Astrophysics, should not just be floating through intergalactic space: they should be screaming across the cosmos at absurdly high speeds. “We calculate that there should be more than a trillion stars in the observable universe moving at velocities of more than a tenth the speed of light,” says Loeb. That’s about 67 million m.p.h. (108 million km/h). And about ten million of those stars, he says, are moving at least five times faster than that.
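Those unit conversions are easy to sanity-check. A quick sketch (the speed of light and the conversion factors are standard values):

```python
C = 299_792_458  # speed of light, m/s

def mps_to_mph(v):
    """Convert metres per second to miles per hour."""
    return v * 3600 / 1609.344

def mps_to_kmh(v):
    """Convert metres per second to kilometres per hour."""
    return v * 3.6

v = 0.1 * C  # a tenth the speed of light
print(f"{mps_to_mph(v) / 1e6:.0f} million mph")   # ~67 million mph
print(f"{mps_to_kmh(v) / 1e6:.0f} million km/h")  # ~108 million km/h
```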

High-velocity stars are not without precedent. Astronomers already knew of a handful of stars in the Milky Way that are moving at a million m.p.h. (1.6 million km/h) or so, and which should eventually leave our galaxy. But this new class of speedsters—if they’re confirmed—would make those so-called hypervelocity stars look like garbage trucks lumbering along in the cosmic slow lane.

There’s reason to hope that the findings are validated, beyond the mere wow factor of the work. Astronomers currently study the origin and development of the universe by trapping particles in telescopes and detectors—photons of light, mostly, and also, more recently, the ghostly particles called neutrinos, which bear information about the stars, galaxies and quasars in which they originated. Superfast stars would be another sort of “particles,” albeit huge, shining ones, which could tell astronomers plenty about the conditions they’ve encountered since they escaped their galactic homes. “They give us the opportunity to do cosmology in an entirely new way,” he says, “with massive objects moving at near lightspeed across the universe.”

How they got moving so fast is the core of Loeb’s and Guillochon’s idea. Most galaxies harbor huge black holes in their cores, and when two galaxies merge to form one, those black holes end up circling each other in a tight do-si-do. Eventually, they too will merge into a single object, but as they approach each other, the complex gravitational interplay between them and the stars that orbit them exerts incredible force—and imparts incredible speed. (Black hole interactions also give rise to hypervelocity stars within the Milky Way, but here there’s just a single black hole, and thus a lot less energy available.)

As the free-range, extra-galactic stars fly across the universe, their trajectories are bent by the gravity of galaxies they pass along the way. “It’s like a ball moving through a pinball machine,” says Loeb. “If we can reconstruct their trajectories back to their original host galaxies, we can test whether General Relativity [Einstein’s theory of gravity] acts the way we expect.”

You can do this only if you actually find the speeding stars—but Loeb is convinced it’s possible. “It’s challenging,” he admits, “but they could be detected with upcoming instruments.” Among them: the Large Synoptic Survey Telescope, the James Webb Space Telescope, the Thirty Meter Telescope and more, all of which are expected to come online by the end of this decade.

“The most exciting aspect,” Loeb continues, “is that these need not be single stars.” They could be double stars, or even stars with planets. “If these planets are habitable,” says Loeb, “these would be the most exhilarating roller-coaster rides in the universe.”

While none of this can happen in our own galaxy at the moment, that’s just a temporary situation. The giant spiral galaxy M31, also known as Andromeda, is slowly approaching the Milky Way. In two billion years or so, they’ll smash together. When they do, the new, gigantic galaxy that emerges will at last be equipped with the twin black holes you need to build a cosmic slingshot.

Click the links to read “The Rise of Themistocles” part 1 and part 2

By 480 BCE, the Athenian general and statesman Themistocles had eliminated his political opponents and stood squarely as the most influential man in Athens. Rising through the political ranks of a young democratic city, Themistocles had taken great strides to prepare his country for war against the imminent Persian invasion. In coordination with the other prominent city-states of Greece, an alliance had been formed against the Persians, with the Athenian navy constituting the majority of the alliance’s naval power.

Leonidas at Thermopylae
by Jacques-Louis David

Themistocles feared that Athens might fall to the invaders

“There the sons of Athens set

The stone that freedom stands on yet.”

“With numerous tribes from Asia’s region brought

The sons of Athens on these waters fought

Erecting, after they had quelled the Mede,

To Artemis this record of the deed.” -Plutarch, from “Themistocles”

The allies lost many lives at the Battle of Artemisium

The bloody encounter took many lives, and the allies lost numerous ships. It was then that word reached Themistocles that Leonidas and the soldiers at Thermopylae had fallen. The land pass into Greece was now open to the Persians. Themistocles and Eurybiades ordered a retreat. As the Persians continued to advance, the Greeks needed a new strategy.

10 Predictions About the Future That Should Scare the Hell Out of You

1. Someone will deliberately engineer a devastating pandemic

Earlier this year, Oxford’s Global Priorities Project compiled a list of catastrophes that could kill off 10 percent or more of the human population. High on the list was a deliberately engineered pandemic, and the authors warned that it could happen in as few as five years.

Many of the technologies for this prospect are starting to appear, including the CRISPR/Cas9 gene-editing system and 3D bioprinters. What’s more, the blueprints for this kind of destruction are being made available. A decade ago, futurist Ray Kurzweil and technologist Bill Joy scolded the US Department of Health for publishing the full genome of the 1918 influenza virus, calling it “extremely foolish.” More recently, a number of scientists spoke out when Nature decided to publish a so-called “gain of function” study explaining how the bird flu could be mutated into something even deadlier.

The fear is that a rogue state, terrorist group, or a malign individual might create their own virus and unleash it. Natural selection is good at creating nasty and highly prolific viruses, but imagine what intentional design could concoct.

2. People who transfer their minds to computers are actually killing themselves

One of the more radical visions of the future is a world in which biological humans have traded in their corporeal bodies in favor of a purely digital existence. This would require a person to literally upload their mind to a supercomputer, but this hypothetical process might actually result in the permanent destruction of the original person. It would be a form of unintentional suicide.

This is what’s known as the “continuity of consciousness” problem. Sure, we may eventually be able to cut, copy, and paste the essence of a person’s personality and memories to a digital substrate, but transferring the seat of consciousness itself may be an untenable proposition. Neuroscientists know that memories are parked in the brain as physical constructs; there’s something physically there to copy. But consciousness still eludes our understanding, and we’re not certain how it arises in the brain, let alone how we could transfer it from point A to point B. It’s also quite possible that subjective awareness cannot be replicated in the digital realm, and that it’s dependent on the presence and orientation of specific physical structures.


Mind uploading will likely require destructive atomic-scale scanning of the brain. It would be similar to the way teleportation is done in Star Trek. Indeed, one of the dirty little secrets of this sci-fi show is that the person being teleported is actually killed each time it happens, replaced by an exact duplicate who’s none the wiser. Mind transfers could be similar, where the original brain is destroyed, replaced by a digital being who’s convinced they’re still the original—but it would be a delusion.

3. Authoritarianism will make a comeback

As threats to national security increase, and as these threats expand in severity, governments will find it necessary to enact draconian measures. Over time, many of the freedoms and civil liberties we currently take for granted, such as the freedom of assembly, the right to privacy (more on this next—it’s worse than you think), or the right to travel both within and beyond the borders of our home country, could be drastically diminished.

At the same time, a fearful population will be more tempted and willing to elect a hardline government that promises to throw the hammer down on perceived threats—even overtly undemocratic regimes.

The threats to national security will have to be severe to instigate these changes, but history has precedents. Following the September 11 attacks and the subsequent mailings of anthrax spores, the US government enacted the Homeland Security Act. This legislation was criticized for being too severe and reactionary, but it’s a perfect example of what happens when a nation feels under threat. Now imagine what would happen if another 9/11-type event happened, but one involving hundreds of thousands of deaths, or even millions.

Such an act of terrorism could be unleashed through miniaturized nuclear weapons, or the deliberate release of bioweapons. And the fact that small groups, and even single individuals, will have the power to attain and use these weapons will only make governments and citizens more willing to accept the loss of freedoms.

4. Privacy will become a thing of the past

We are rapidly approaching the era of ubiquitous surveillance, a time when virtually every aspect of our lives will be monitored. Privacy as we know it will cease to exist, supplanted by Big Brother’s eyes and ears.

Governments, ever fearful of internal and external threats, will increasingly turn to low-cost, high-tech surveillance technologies. Corporations, eager to track the tendencies and behaviors of their users, will find it impossible to resist. Citizens of the surveillance society will have no choice but to accept that every last detail of their lives will be recorded.

Already today, surveillance cameras litter our environment, while our computers, smartphones, and tablet devices follow our daily affairs, whether it be our purchasing proclivities or the types of porn we watch.

Looking ahead, government agencies and police could deploy more sophisticated tracking devices, including the much-anticipated smart dust—tiny sensors that would monitor practically anything, from light and temperature to chemicals and vibrations. These particles could be sprinkled around Earth, functioning as the eyes and ears of the planet. In conjunction with powerful data-mining algorithms, virtually everything we do would be monitored. To ensure accountability, we could watch the watchers—but will they allow it?

5. Robots will find it easy to manipulate us

Long before artificial intelligences become truly conscious or self-aware, they’ll be programmed by humans and corporations to seem that way. We’ll be tricked into thinking they have minds of their own, leaving us vulnerable to all manner of manipulation and persuasion. Such is the near future envisaged by futurist and sci-fi novelist David Brin. He refers to these insidious machine minds as HIERS, or Human-Interaction Empathetic Robots.

“Human empathy is both one of our paramount gifts and among our biggest weaknesses,” Brin told Gizmodo. “For at least a million years, we’ve developed skills at lie-detection. [but] no liars ever had the training that these new HIERS will get, learning via feedback from hundreds, then thousands, then millions of human exchanges around the world, adjusting their simulated voices and facial expressions and specific wordings, till the only folks able to resist will be sociopaths—and they have plenty of chinks in their armor, as well.”


Brin figures that some experts will be able to tell when they’re being manipulated by one of these bots, but “that will matter about as much as it does today, as millions of voters cast their ballots based on emotional cues, defying their own clear self-interest or reason.” Eventually, robots may guide and protect their gullible human partners, advising them when “to ignore the guilt-tripping scowl, the pitiable smile, the endearingly winsome gaze, the sob story or eager sales pitch—and, inevitably, the claims of sapient pain at being persecuted or oppressed for being a robot.”

6. The effects of climate change will be irreversible

Late last year, world leaders forged an agreement to limit human-caused global warming to two degrees Celsius. It’s a laudable goal, but we may have already passed a critical tipping point. The effects of climate change are going to be felt for hundreds, and possibly thousands, of years to come. And as we enter into the planet’s Sixth Mass Extinction, we run the risk of damaging critical ecosystems and radically diminishing the diversity of life on Earth.

Climate models show that even if carbon dioxide emissions came to a sudden halt, the levels of this greenhouse gas in Earth’s atmosphere would continue to warm our planet for hundreds of years. Our oceans will slowly release the CO2 they have been steadily absorbing, and our atmosphere may not return to pre-industrial levels for many centuries. As a recent assessment from the Intergovernmental Panel on Climate Change stated, “A large fraction of climate change is largely irreversible on human time scales.”

In The Bulletin, science writer Dawn Stover lists the ramifications:

- The melting of snow and ice will expose darker patches of water and land that absorb more of the sun’s radiation, accelerating global warming and the retreat of ice sheets and glaciers.
- Scientists agree that the West Antarctic Ice Sheet has already gone into an unstoppable decline.
- Currents that transport heat within the oceans will be disrupted.
- Ocean acidification will continue to rise, with unknown effects on marine life.
- Thawing permafrost and sea beds will release methane, a greenhouse gas.
- Droughts predicted to be the worst in 1,000 years will trigger vegetation changes and wildfires, releasing carbon.
- Species unable to adapt quickly to a changing climate will go extinct.
- Coastal communities will be submerged, creating a humanitarian crisis.

7. The antibiotic era will end

An increasing number of diseases are becoming resistant to antibiotics. Eventually, we could make the unhappy transition to a “post-antibiotic era,” a time when even the most routine infections could threaten our lives.

The era of antimicrobial-resistant bacteria will change medicine as we know it. Transplant surgery will become difficult, if not impossible. Simple conditions, such as a burst appendix, will be perilous once again. Pneumonia would ravage the elderly, as would many other diseases of old age, including cancer.

How bad could it get? A recent report by the Institute and Faculty of Actuaries in Britain predicted that the new era of antimicrobial resistance will kill upwards of 10 million people each year by 2050. No wonder they’re calling it the “antibiotic apocalypse.”


Thankfully, we’re not completely out of options. Scientists are currently on the hunt for undiscovered antibacterial compounds. They’re also working to develop bacteria-fighting viruses and vaccines. Failing that, we could always design artificial microorganisms that can hunt down and destroy problematic bacteria.

8. Getting robots to kill humans will be disturbingly routine—and dangerous

It’s The Terminator scenario come to life—the unleashing of fully automated weapons systems that dispassionately hunt down and kill human combatants.

These systems, known as LAWS (lethal autonomous weapons systems), are under development, and it’ll only be a matter of time before they’re tacked onto pre-existing weapons, including powerful munitions and nuclear warheads. These robotic weapons are supposed to reduce human casualties and make war more humane, but experts fear these futuristic killing machines could be prone to accidents and even escape human control.


LAWS will be imbued with safety mechanisms and “moral” programming, but as Wendell Wallach from Yale University’s Interdisciplinary Center for Bioethics told Gizmodo, they’ll be difficult to test, will still have software bugs, and will act unpredictably at times, even displaying unanticipated behavior.

“The speed-up of warfare and cost factors will make LAWS essential for advanced nations and attractive to non-state actors,” Wallach said. “While countries like the US promise that there will be meaningful human control and strong communication links to LAWS, they are particularly interested in LAWS for undersea weapons because they are difficult to communicate with.” As an example, Wallach worries about an unmanned submarine that mistakenly launches powerful munitions or even a nuclear warhead.

“We could have a nuclear conflagration before anyone even recognized what happened,” he said. “This is only one of hundreds of scenarios where semi-intelligent weaponry poses existential risks for humanity, long before the better-recognized superintelligence might ever be realized. The long-term consequences of failing to ban LAWS far outweigh any short-term benefits.”

9. We’ll lose all the satellites

Few people today are aware of the risks posed by the partial or total loss of our satellite fleet, a catastrophe that could be instigated by a Kessler Syndrome (as portrayed in the film Gravity), a massive geomagnetic solar storm, or through a space war.

Thread: Near-light-speed weaponry: what would it do, what would it look like?

I am interested in finding out as much as possible about what objects traveling at near-light speeds (.9c or greater) might look like from various angles and distances. I am putting together a short "space opera" sci-fi film, and although I will in part be going for retro, stylized visuals, I would like to try and inform these visuals with some real-life physics and ideas (e.g., even if I have funky ships that look like they're out of an old sci-fi serial, I would like to depict appropriate distances for outer space—a "nearby" ship might be visible as just a speck of light, or only by the occlusion of stars). I'm no physicist, so please straighten out any crazy assumptions I may have picked up. I'm willing to learn!

From my understanding, even a small object traveling at near-light-speed would acquire tremendous kinetic energy upon impact with an object at rest. Hence my assumption that such objects would make pretty scary weapons. How much damage would a grain of sand do versus, say, a golf ball? What sizes and speeds would be more appropriate for destroying a ship, a space station or even a planet?

Would such an approaching object be easily detectable from great distances once launched? How about visible to the naked eye?

Let's say I'm floating in the void ("I" being the movie camera's field of view), and a "golf ball" traveling at .99c passes within a few kilometers from me, traveling straight "down" across my field of vision as I look straight ahead. What does it look like as it passes me as opposed to when it is approaching (looking up before it passes) and when it is receding (looking down after it passes)? Is it visible only for a split-second as a streak of light, or do I not notice it at all? How does this change if we increase my distance to the object by a few hundred kilometers, or a few thousand kilometers? Or bring me right next to it! Would blue-shifting and red-shifting make a noticeable difference to the eye, or is it subtle enough to only be detectable by equipment?

And of course, what would happen to a ship unfortunate enough to be in its path? Instant space dust, a big visible burst of energy, or something else? I am trying to think visually about it. Also, would such a collision create a cloud of debris also traveling at comparable speeds?

OK, I could go on forever asking questions, but I'll leave it at this for now. Thanks for your time!

They would be invisible to the naked eye, and something the size of a golf ball would be practically invisible to any plausible sensors.

However, it's actually something that can be defended against. The incredible velocity of the object works against it if it hits even the thinnest of "shields". Such a shield could be a thin sheet of film, or an extremely thin field of smoke particles. Even the thinnest of practicable shields would instantly vaporize and ionize the golf ball, converting it into a shower of charged particles which can then be deflected by a modest magnetic field.

If you break it down into super slow motion, the result is essentially a spherical explosion centered just underneath the impact point. In the first moments, the golf ball splashes into the impact point affecting a depth of about a golf ball's diameter. This turns this impact point into ridiculously hot plasma radiating away awesome amounts of thermal x-rays. Maybe half of the energy is invisibly "vented" into outer space while the other half vaporizes its way through the immediate radius at hypersonic speeds.

After vaporizing a radius on the order of a meter, the plasma fireball has cooled down enough to no longer be glowing in thermal x-rays. This is when things really start to change, because plasma isn't transparent to UV. This marks the end of the creation of the fireball, a hemisphere maybe a few meters wide centered on the impact point. Unless you're using ridiculously super-slow-mo, everything up until this point is instantaneous.

From then on, the primary destruction mechanism is from the expansion of this hot gaseous fireball. The fireball itself mostly blasts out of the blast crater, but it also pushes outward against the crater's sides, sending that stuff outward at unstoppable supersonic speeds. If you're using slow-motion, it's an unstoppable expanding cloud of glowing molten debris and hot glowing smoke.


Thanks, lots of very useful stuff in your post.

IsaacKuo has already explained quite a lot, so I'll just try to show you how to calculate it.

Well, all these questions can be answered with the relativistic kinetic energy formula (you'll find the details on the Wiki page). The normal kinetic energy formula is of course K.E. = (1/2)mv^2 (where m is mass and v is velocity); when you're talking about things travelling at near light speed, you need a somewhat more complex formula.

That is K.E. = mc^2 (gamma - 1). c is, of course, the speed of light, and gamma is a more complicated thing: gamma = 1/sqrt(1 - (v/c)^2) (sqrt meaning "square root of", as I'm not sure how to type the symbol).

Anyway, to take your examples: a grain of sand. Let's say it's 1 mm in diameter and has a density of 3,000 kg/m^3 (which I think is about the density of most rocks), so a mass of about 1.5708e-6 kg (or 1.57 mg). If it's going at 95% c, then:

gamma = 1/sqrt(1 - (v/c)^2)
gamma = 1/sqrt(1 - (285,000,000/300,000,000)^2)
gamma = 1/sqrt(1 - 0.95^2)
gamma = 1/sqrt(1 - 0.9025)
gamma = 1/sqrt(0.0975)
gamma = 1/0.31225
gamma = 3.2026
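The same gamma can be computed in a couple of lines of Python (a minimal sketch of the formula above):

```python
import math

def lorentz_gamma(v_frac):
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_frac ** 2)

print(lorentz_gamma(0.95))  # ≈ 3.2026
```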

K.E. = mc^2 (gamma - 1)
K.E. = (1.5708e-6 kg)(3e8 m/s)^2 (3.2026 - 1)
K.E. = (1.5708e-6)(9e16)(2.2026)
K.E. = (1.4137e11)(2.2026)
K.E. = 3.114e11 joules

If you want to convert this into everybody's favourite measure of explosions, the ton of TNT equivalent, then divide it by 4.184e9 joules. So we get about 74.4 tons of TNT, comparable to a very small nuclear warhead, and certainly very powerful.
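Putting the whole sand-grain calculation together (a sketch using the same rounded c = 3e8 m/s as the working above):

```python
import math

C = 3.0e8            # speed of light, m/s (rounded, as above)
TNT_TON_J = 4.184e9  # joules per ton of TNT equivalent

def kinetic_energy(mass_kg, v_frac):
    """Relativistic kinetic energy: K.E. = m c^2 (gamma - 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v_frac ** 2)
    return mass_kg * C ** 2 * (gamma - 1.0)

# 1 mm diameter sand grain at density 3,000 kg/m^3
mass = (4 / 3) * math.pi * (0.5e-3) ** 3 * 3000  # ≈ 1.57e-6 kg
e = kinetic_energy(mass, 0.95)
print(f"{e:.3e} J = {e / TNT_TON_J:.1f} tons of TNT")  # ≈ 3.114e11 J ≈ 74.4 tons
```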

OK, these calculations are very tedious, but luckily there are several online calculators for relativistic kinetic energy. This one is probably the simplest to use: Relativity Calculator (it also handily tells you the amount of time dilation experienced).

Let's see, if the sand grain were travelling at 99% the speed of light it would be even more powerful, with 8.608e+11 joules of energy, or about 206 tons of TNT.

If the same grain is going at 99.9% of the speed of light then it has 3.021e+12 joules of energy, or 722 tons of TNT. As you can see, the energy climbs steeply (in fact it diverges) as you approach light speed.

A golf ball? Well, according to Wikipedia a golf ball weighs 45.93 grams. So:

At 95% c the Kinetic Energy is 9.105e+15 joules, or 2,176,147 tons of TNT, or about 2.2 Megatons (now we're talking!).
At 99% c the Kinetic Energy is 2.517e+16 joules, or 6,015,774 tons of TNT, or about 6 Megatons.
At 99.9% c the Kinetic Energy is 8.832e+16 joules, or 21,108,987 tons of TNT, or about 21.1 Megatons.
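The golf-ball figures above can be reproduced with the same formula (a sketch; the 45.93 g mass is the value quoted from Wikipedia):

```python
import math

C = 3.0e8  # speed of light, m/s (rounded)

def ke_megatons(mass_kg, v_frac):
    """Relativistic kinetic energy converted to megatons of TNT."""
    gamma = 1.0 / math.sqrt(1.0 - v_frac ** 2)
    joules = mass_kg * C ** 2 * (gamma - 1.0)
    return joules / 4.184e9 / 1e6  # tons of TNT -> megatons

for v_frac in (0.95, 0.99, 0.999):
    print(f"{v_frac:.3f} c: {ke_megatons(0.04593, v_frac):.1f} megatons")
# 0.950 c: 2.2 megatons
# 0.990 c: 6.0 megatons
# 0.999 c: 21.1 megatons
```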

These kinds of yields are good enough to destroy spacecraft and stations (although good old nuclear bombs would probably be more practical), but if you want to destroy planets (and who doesn't? Muahhahaha!), then we'll need something a bit bigger.

Read my post in this thread for an exposition of what relativistic kill vehicles might do when hitting a planet. The atmosphere of a planet like Earth will matter a lot: small objects will burn up very quickly in the air and never reach the ground, no matter how fast they're going (check out the Earth Impact Effects Program, although it's not set up to deal with relativistic speeds). That said, a great deal of energy is released in the burn-up, and that may be enough to cause huge atmospheric heating and cook the inhabitants of the planet (though I don't know how to calculate that kind of effect).

Planet killers will need to be big, so that most of their structure survives atmospheric friction and they actually hit: several tons at least, and a few hundred to a few thousand tons to be really sure. You want energies in the high teraton range to wipe out a civilization. For instance, 100 tons travelling at 95% c produces a 4.74-teraton explosion, about enough to devastate half a continent. 1,000 tons will produce 47.37 teratons; this is getting there, but it's still not as powerful as, for instance, the asteroid that wiped out the dinosaurs (which was apparently over 100 teratons). Let's go for 10,000 tons: that's 473.71 teratons! Now that, I think, would wipe the planet clean, no chance for survival.

As IsaacKuo has already said, a relativistic object wouldn't produce any exotic, detectable effects; it would just look like a normal object travelling fast. One interesting (and very dangerous) thing about stuff travelling near light speed is that even if you do detect it, it's coming so fast that it will arrive shortly after you notice it.

For example, say a large relativistic kill vehicle is travelling towards the Earth at 99% of the speed of light and we detect it by some method (say, a highly sensitive space telescope); it will arrive before we can do much of anything. Suppose we detected it at the edge of our Solar System, 10 billion kilometres away: the light we detect coming from it takes 33,333 seconds (approximately 9.26 hours) to reach us. But in that time the kill vehicle will have travelled 9.9 billion km, and will be only 100 million km from Earth. It will arrive in just 337 seconds, a little under 6 minutes. So we have only 6 minutes to save the Earth! Not much you can do in that time.
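The warning-time arithmetic generalizes neatly: since the object trails its own light, the gap between seeing it and being hit is just the light-travel time scaled by (1/β − 1). A quick sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def warning_time_s(distance_m, beta):
    """Seconds between the object's light reaching us and the object
    itself arriving, assuming both set out from distance_m together."""
    return (distance_m / C) * (1.0 / beta - 1.0)

d = 1e13  # 10 billion km, roughly the edge of the Solar System
print(warning_time_s(d, 0.99) / 60)  # about 5.6 minutes of warning
print(warning_time_s(d, 0.999))      # about 33 seconds at 99.9% c
```

At 99.9% of light speed the warning shrinks to about half a minute, which is why detection distance matters far more than detection sensitivity here.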

Scientists discover why flies are so hard to swat

Over the past two decades, Michael Dickinson has been interviewed by reporters hundreds of times about his research on the biomechanics of insect flight. One question from the press has always dogged him: Why are flies so hard to swat?

"Now I can finally answer," says Dickinson, the Esther M. and Abe M. Zarem Professor of Bioengineering at the California Institute of Technology (Caltech).

Using high-resolution, high-speed digital imaging of fruit flies (Drosophila melanogaster) faced with a looming swatter, Dickinson and graduate student Gwyneth Card have determined the secret to a fly's evasive maneuvering. Long before the fly leaps, its tiny brain calculates the location of the impending threat, comes up with an escape plan, and places its legs in an optimal position to hop out of the way in the opposite direction. All of this action takes place within about 100 milliseconds after the fly first spots the swatter.

"This illustrates how rapidly the fly's brain can process sensory information into an appropriate motor response," Dickinson says.

For example, the videos showed that if the descending swatter--actually, a 14-centimeter-diameter black disk, dropping at a 50-degree angle toward a fly standing at the center of a small platform--comes from in front of the fly, the fly moves its middle legs forward and leans back, then raises and extends its legs to push off backward. When the threat comes from the back, however, the fly (which has a nearly 360-degree field of view and can see behind itself) moves its middle legs a tiny bit backwards. With a threat from the side, the fly keeps its middle legs stationary, but leans its whole body in the opposite direction before it jumps.

"We also found that when the fly makes planning movements prior to take-off, it takes into account its body position at the time it first sees the threat," Dickinson says. "When it first notices an approaching threat, a fly's body might be in any sort of posture depending on what it was doing at the time, like grooming, feeding, walking, or courting. Our experiments showed that the fly somehow 'knows' whether it needs to make large or small postural changes to reach the correct preflight posture. This means that the fly must integrate visual information from its eyes, which tell it where the threat is approaching from, with mechanosensory information from its legs, which tells it how to move to reach the proper preflight pose."

The results offer new insight into the fly nervous system, and suggest that within the fly brain there is a map in which the position of the looming threat "is transformed into an appropriate pattern of leg and body motion prior to take off," Dickinson says. "This is a rather sophisticated sensory-to-motor transformation and the search is on to find the place in the brain where this happens," he says.

Dickinson's research also suggests an optimal method for actually swatting a fly. "It is best not to swat at the fly's starting position, but rather to aim a bit forward of that to anticipate where the fly is going to jump when it first sees your swatter," he says.

The paper, "Visually Mediated Motor Planning in the Escape Response of Drosophila," will be published August 28 in the journal Current Biology.

A model approach to climate change

The Earth is warming up, with potentially disastrous consequences. Computer climate models based on physics are our best hope of predicting and managing climate change, as Adam Scaife, Chris Folland and John Mitchell explain

It is official: the Earth is getting hotter, and it is down to us. This month scientists from over 60 nations on the Intergovernmental Panel on Climate Change (IPCC) released the first part of their latest report on global warming. In the report the panel concludes that it is very likely that most of the 0.5 °C increase in global temperature over the last 50 years is due to man-made emissions of greenhouse gases. And the science suggests that much greater changes are in store: by 2100 anthropogenic global warming could be comparable to the warming of about 6 °C since the last ice age.

The consequences of global warming could be catastrophic. As the Earth continues to heat up, the frequency of floods and droughts is likely to increase, water supplies and ecosystems will be placed under threat, agricultural practices will have to be changed and millions of people may be displaced as the sea level rises. The global economy could also be severely affected. The recent Stern Review, which was commissioned by the UK government to assess the economic impact of climate change, warns that 5–20% of the world’s gross domestic product could be lost unless large cuts in greenhouse-gas emissions are made soon. But how do we make predictions of climate change, and why should we trust them?

The climate is an enormously complex system, fuelled by solar energy and involving interactions between the atmosphere, land and oceans. Our best hope of understanding how the climate changes over time and how we may be affecting it lies in computer climate models developed over the past 50 years. Climate models are probably the most complex in all of science and have already proved their worth with startling success in simulating the past climate of the Earth. Although very much a multidisciplinary field, climate modelling is rooted in the physics of fluid mechanics and thermodynamics, and physicists worldwide are collaborating to improve these models by better representing physical processes in the climate system.

Not a new idea

Long before fears of climate change arose, scientists were aware that naturally occurring gases in the atmosphere warm the Earth by trapping the infrared radiation that it emits. Indeed, without this natural “greenhouse effect” – which keeps the Earth about 30 °C warmer than it would otherwise be – life may never have evolved. Mathematician and physicist Joseph Fourier was the first to describe the greenhouse effect in the early 19th century, and a few decades later John Tyndall realized that gases like carbon dioxide and water vapour are the principal causes, rather than the more abundant atmospheric constituents such as nitrogen and oxygen.

Carbon dioxide (CO2) gas is released when we burn fossil fuels, and the first person to quantify the effect that CO2 could have in enhancing the greenhouse effect was 19th-century Swedish chemist Svante Arrhenius. He calculated by hand that a doubling of CO2 in the atmosphere would ultimately lead to a 5–6 °C increase in global temperature – a figure remarkably close to current predictions. More detailed calculations in the late 1930s by British engineer Guy Callendar suggested a less dramatic warming of 2 °C, with a greater effect in the polar regions.

Meanwhile, at the turn of the 20th century, Norwegian meteorologist Vilhelm Bjerknes founded the science of weather forecasting. He noted that given detailed initial conditions and the relevant physical laws it should be possible to predict future weather conditions mathematically. Lewis Fry Richardson took up this challenge in the 1920s by using numerical techniques to solve the differential equations for fluid flow. Richardson’s forecasts were highly inaccurate, but his methodology laid the foundations for the first computer models of the atmosphere developed in the 1950s. By the 1970s these models were more accurate than forecasters who relied on weather charts alone, and continued improvements since then mean that today three-day forecasts are as accurate as one-day forecasts were 20 years ago.

But given that weather forecasts are unreliable for more than a few days ahead, how can we hope to predict climate, say, tens or hundreds of years into the future? Part of the answer lies in climate being the average of weather conditions over time. We do not need to predict the exact sequence of weather in order to predict future climate, just as in thermodynamics we do not need to predict the path of every molecule to quantify the average properties of gases.

In the 1960s researchers based at the Geophysical Fluid Dynamics Laboratory in Princeton, US, built on weather-forecasting models to simulate the effect of anthropogenic CO2 emissions on the Earth’s climate. Measurements by Charles Keeling at Mauna Loa, Hawaii, starting in 1957 had shown clear evidence that the concentration of CO2 in the atmosphere was increasing. The Princeton model predicted that doubling the amount of CO2 in the atmosphere would warm the troposphere – the lowest level of the atmosphere – but also cool the much higher stratosphere, while producing the greatest warming towards the poles, in agreement with Callendar’s early calculations.

The nuts and bolts of a climate model

The climate system consists of five elements: the atmosphere, the ocean, the biosphere, the cryosphere (ice and snow) and the geosphere (rock and soil). These components interact on many different scales in both space and time, causing the climate to have a large natural variability, and human influences such as greenhouse-gas emissions add further complexity (figure 1). Predicting the climate at a certain time in the future thus depends on our ability to include as many of the key processes as possible in our climate models.

At the heart of climate models and weather forecasts lie the Navier–Stokes equations, a set of differential equations that allows us to model the dynamics of the atmosphere as a continuous, compressible fluid. By transforming the equations into a rotating frame of reference in spherical coordinates (the Earth), we arrive at the basic equations of motion for a “parcel” of air in each of the east–west, north–south and vertical directions. Additional equations describe the thermodynamic properties of the atmosphere (see figure 2).

Unfortunately, there is no known analytical solution to the Navier–Stokes equations; indeed, finding one is among the greatest challenges in mathematics. Instead, the equations are solved numerically on a 3D lattice of grid points that covers the globe. The spacing between these points dictates the resolution of the model, which is currently limited by available computing power to about 200 km in the horizontal direction and 1 km in the vertical, with finer vertical resolution near the Earth’s surface. Much greater vertical than horizontal resolution is needed because most atmospheric and oceanic structures are shallow compared with their width. The Navier–Stokes equations allow climate modellers to calculate the physical parameters – temperature, humidity, wind speed and so on – at each grid point at a single moment based on their values some time earlier. The time interval or “timestep” used must be short enough to give solutions that are accurate and numerically stable, but the shorter the timestep, the more computer time is needed to run the model. Current climate models use timesteps of about 30 min, while the same basic models but with shorter timesteps and higher spatial resolution are used for weather forecasting.
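The grid-plus-timestep machinery can be illustrated with a toy example. The sketch below is my own cartoon, not a fragment of any real model: it takes a single advection term from the equations of motion and steps it forward on a 1D line of grid points with an upwind finite-difference update, including the stability constraint that links timestep to grid spacing.

```python
import numpy as np

nx, dx = 100, 200e3  # 100 grid points at 200 km spacing, as in the text
c = 10.0             # a steady 10 m/s wind carrying a temperature anomaly
dt = 1800.0          # 30-minute timestep, as in the text

# Stability (CFL) condition: information must not cross more than one
# grid cell per timestep, or the numerical solution blows up.
assert c * dt / dx <= 1.0

u = np.exp(-((np.arange(nx) - 50.0) ** 2) / 50.0)  # initial anomaly blob
for _ in range(48):  # step forward one model day
    u[1:] -= (c * dt / dx) * (u[1:] - u[:-1])      # upwind update of du/dt = -c du/dx

print(np.argmax(u))  # the blob has drifted a few grid cells downwind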

However, some processes that influence our climate occur on smaller spatial or shorter temporal scales than the resolution of these models. For example, clouds can heat the atmosphere by releasing latent heat, and they also interact strongly with infrared and visible radiation. But most clouds are hundreds of times smaller than the typical computer-model resolution. If clouds were modelled incorrectly, climate simulations would be seriously in error.

Climate modellers deal with such sub-resolution processes using a technique called parametrization, whereby small-scale processes are represented by average values over one grid box that have been worked out using observations, theory and case studies from high-resolution models. Examples of cloud parametrization include “convective” schemes that describe the heavy tropical rainfall that dries the atmosphere through condensation and warms it through the release of latent heat, and “cloud” schemes that use the winds, temperatures and humidity calculated by the model to simulate the formation and decay of the clouds and their effect on radiation.

Parametrizing interactions in the climate system is a major part of climate-modelling research. For instance, the main external input into the Earth’s climate is electromagnetic radiation from the Sun, so the way the radiation interacts with the atmosphere, ocean and land surface must be accurately described. Since this radiation is absorbed, emitted and scattered by non-uniform distributions of atmospheric gases such as water vapour, carbon dioxide and ozone, we need to work out the average concentration of different gases in a grid box and combine this with spectroscopic data for each gas. The calculated overall heating rate then adds to the “source term” in the thermodynamic equation (see figure 2).

The topography of the Earth’s surface, its frictional properties and its reflectivity also vary on scales smaller than the resolution of the model. These are important because they control the exchange of momentum, heat and moisture between the atmosphere and the Earth’s surface. In order to calculate these exchanges and feed them into source terms in the momentum and thermodynamic equations, climate modellers have to parametrize atmospheric turbulence. Numerous other parametrization schemes are now being included and improved in state-of-the-art models, including sea ice, soil characteristics, atmospheric aerosols and atmospheric chemistry.

In addition to improving parametrizations, perhaps the biggest advance in climate modelling in the past 15 years has been to couple atmospheric models to dynamic models of the ocean. The ocean is crucial for climate because it controls the flux of water vapour and latent heat into the atmosphere, as well as storing large amounts of heat and CO2. In a coupled model, the ocean is fully simulated using the same equations that describe the motion of the atmosphere. This is in contrast to older “slab models” that represented the ocean as simply a stationary block of water that can exchange heat with the atmosphere. These models tended to overestimate how quickly the oceans warm as global temperature increases.

Forcings and feedbacks

The most urgent issue facing climate modellers today is the effect humans are having on the climate system. Parametrizing the interactions between the components of the climate system allows the models to simulate the large natural variability of the climate. But external factors or “radiative forcings” – which also include natural factors like the eruption of volcanoes or variations in solar activity – can have a dramatic effect on the radiation balance of the climate system.

The major anthropogenic forcing is the emission of CO2. The concentration of CO2 in the atmosphere has risen from 280 ppm to 380 ppm since the industrial revolution, and because it lasts for so long in the atmosphere (about a century) CO2 has a long-term effect on our climate.
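The warming effect of that concentration rise is often estimated with a simple logarithmic fit. The formula below is the standard approximation used in the IPCC-era literature, not something given in this article, so treat the coefficient as an assumption:

```python
import math

def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) from raising CO2 from c0_ppm to c_ppm,
    using the common logarithmic fit dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing_wm2(380.0))  # ~1.6 W/m^2 for the rise since the industrial revolution
print(co2_forcing_wm2(560.0))  # ~3.7 W/m^2 for a doubling of CO2
```

The logarithm is why each successive tonne of CO2 adds slightly less forcing than the last, although emissions are currently rising fast enough to outpace that saturation.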

While earlier models could tell us the eventual “equilibrium” warming due to, say, a doubling in CO2 concentration, they could not predict accurately how the temperature would change as a function of time. However, because coupled ocean–atmosphere models can simulate the slow warming of the oceans, they allow us to predict this “transient climate response”. Crucially, these state-of-the-art models also allow us to input changing emissions over time to predict how the climate will vary as the anthropogenic forcing increases.

Carbon dioxide is not the only anthropogenic forcing. For example, in 1988 Jim Hansen at the Goddard Institute for Space Studies in the US and colleagues used a climate model to demonstrate the importance of other greenhouse gases such as methane, nitrous oxide and chlorofluorocarbons (CFCs), which are also separately implicated in depleting the ozone layer. Furthermore, in the 1980s sulphate aerosol particles in the troposphere produced by sulphur in fossil-fuel emissions were found to scatter visible light back into space and thus significantly cool the climate. This important effect was first included in a climate model in 1995 by one of the authors (JM) and colleagues at the Hadley Centre. Aerosols also have an indirect effect on climate by causing cloud droplets to become smaller, thus increasing the reflectivity and prolonging the lifetime of clouds. The latest models include these indirect effects, as well as those of natural volcanic aerosols, mineral dust particles and non-sulphate aerosols produced by burning fossil fuels and biomass.

To make matters more complex, the effect of climate forcings can be amplified or reduced by a variety of feedback mechanisms. For example, as the ice sheets melt, the cooling effect they produce by reflecting radiation away from the Earth is reduced – a positive-feedback process known as the ice–albedo effect. Another important feedback process that has been included in models in the past few years involves the absorption and emission of greenhouse gases by the biosphere. In 2000 Peter Cox, then at the Hadley Centre, showed that global warming could lead to the death of vegetation in regions such as the Amazonian rainforests through reduced rainfall as well as increased respiration from bacteria in the soil. Both will release additional CO2 to the atmosphere, leading, in turn, to further warming.
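The arithmetic of such feedbacks is worth spelling out. If each degree of direct warming triggers responses that add a further fraction f of a degree, the resulting geometric series converges (for f < 1) to an amplified total. The numbers below are purely illustrative, not model output:

```python
def amplified_warming(direct_dT, f):
    """Total warming once a feedback returning a fraction f of any warming
    has fully played out: dT * (1 + f + f^2 + ...) = dT / (1 - f)."""
    if f >= 1.0:
        raise ValueError("f >= 1 means a runaway feedback: no finite answer")
    return direct_dT / (1.0 - f)

print(amplified_warming(1.2, 0.5))  # 1.2 C of direct warming becomes 2.4 C in total
```

This is also why uncertainty in feedback strength translates into such a wide spread of model predictions: the amplification factor 1/(1 − f) grows very quickly as f approaches 1.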

Improvements in computing power since the 1970s have been crucial in allowing additional processes to be included. Although current models typically contain a million lines of code, we can still simulate years of model time per day, allowing us to run simulations many times over with slightly different values of physical parameters. This allows us to assess how sensitive the predictions of climate models are to uncertainties in these values. As computing power and model resolution increase still further, we will be able to resolve more processes explicitly, reducing the need for parametrization.

The accuracy of climate models can be assessed in a number of ways. One important test of a climate model is to simulate a stable “current climate” for thousands of years in the absence of forcings. Indeed, models can now produce climates with tiny changes in surface temperature per century but with year-on-year, seasonal and regional changes that mimic those observed. These include jet streams, trade winds, depressions and anticyclones that would be difficult for even the most experienced forecaster to distinguish from real weather, and even major year-on-year variations like the El Niño–Southern Oscillation.

Another crucial test for climate models is that they are able to reproduce observed climate change in the past. In the mid-1990s Ben Santer at the Lawrence Livermore National Laboratory in the US and colleagues strengthened the argument that humans are influencing climate by showing that climate models successfully simulate the spatial pattern of 20th-century climate change only if they include anthropogenic effects. More recently, Peter Stott and co-workers at the Hadley Centre showed that this is also true for the temporal evolution of global temperature (see figure 3). Such results demonstrate the power of climate models in allowing us to add or remove forcings one by one to distinguish the effects humans are having on the climate.

Climate models can also be tested against very different climatic conditions further in the past, such as the end of the last ice age around 10,000 years ago and the Holocene warm period that followed it. As no instrumental data are available from this time, the models are tested against “proxy” indicators of temperature change, such as tree rings or ice cores. These data are not as reliable as modern-day measurements, but climate models have successfully reproduced phenomena inferred from the data, such as the southward advance of the Sahara desert over the last 9000 years.

Predicting the future

Having made our models and tested them against current and past climate data, what do they tell us about how the climate might change in years to come? First, we need to input a scenario of future emissions of greenhouse gases. Many different scenarios are used, based on estimates of economic and social factors, and this is one of the major sources of uncertainty in climate prediction. But even if greenhouse-gas emissions are substantially reduced, the long atmospheric lifetime of CO2 means that we cannot avoid further climate change due to CO2 already in the atmosphere.

Predictions vary between the different climate models developed worldwide, due to the precise details of the parametrizations within those models. Cloud parametrizations in particular contribute to the uncertainty, because clouds can either cool the atmosphere through reflection or warm it by reducing its radiative emissions. Such uncertainties led to a best estimate, given in the third IPCC report in 2001, of global warming in the range 1.4–5.8 °C by 2100 compared with 1990.

Despite the uncertainties, however, all models show that the Earth will warm in the next century, with a consistent geographical pattern (figure 4). For example, positive feedback from the ice–albedo effect produces greater warming near the poles, particularly in the Arctic. Oceans, on the other hand, will warm more slowly than the land due to their large thermal inertia. Average rainfall is expected to increase because warmer air can hold a greater amount of water before becoming saturated. However, this extra capacity for atmospheric moisture will also allow more evaporation, drying of the soil and soaring temperatures in continental areas in summer.

Sea levels are predicted to rise by about 40 cm (with considerable uncertainty) by 2100 due largely to thermal expansion of the oceans and melting of land ice. This may seem like a small rise, but much of the human population lives in coastal zones, where people are particularly at risk from enhanced storm flooding – in Bangladesh, for example, many millions of people could be displaced. In the longer term, there are serious concerns over melting of the Greenland and West Antarctic ice sheets that could lead to much greater increases in sea level.

We still urgently need to improve the modelling and observation of many processes to refine climate predictions, especially on seasonal and regional scales. For example, hurricanes and typhoons are still not represented in many models and other phenomena such as the Gulf Stream are poorly understood due to lack of observations. We are therefore not confident of how hurricanes and other storms may change as a result of global warming, if at all, or how close we might be to a major slowing of the Gulf Stream.

Though they will be further refined, there are many reasons to trust the predictions of current climate models. Above all, they are based on established laws of physics and embody our best knowledge about the interactions and feedback mechanisms in the climate system. Over a period of a few days, models can forecast the weather skilfully; they also do a remarkable job of reproducing the current worldwide climate, as well as the global mean temperature over the last century. They also simulate the dramatically different climates of the last ice age and the Holocene warm period, which were the result of forcings comparable in size to the anthropogenic forcing expected by the end of the 21st century.

Although there may be a few positive aspects to global warming – for instance high-latitude regions may experience extended growing seasons and new shipping routes are likely to be opened up in the Arctic as sea ice retreats – the great majority of impacts are likely to be negative. Hotter conditions are likely to stress many tropical forests and crops while outside the tropics, events like the 2003 heatwave that led to the deaths of tens of thousands of Europeans are likely to be commonplace by 2050. This year is already predicted to be the hottest on record.

We are at a critical point in history where not only are we having a discernible effect on the Earth’s climate, but we are also developing the capability to predict this effect. Climate prediction is one of the largest international programmes of scientific research ever undertaken, and it led to the 1997 Kyoto Protocol set up by the United Nations to address greenhouse-gas emissions. Although the protocol has so far led to few changes in atmospheric greenhouse-gas concentrations, the landmark agreement paves the way for further emissions cuts. Better modelling of natural seasonal and regional climate variations is still needed to improve our estimates of the impacts of anthropogenic climate change. But we are already faced with a clear challenge: to use existing climate predictions wisely and develop responsible mitigation and adaptation policies to protect ourselves and the rest of the biosphere.

Ask Ethan: Are 'Dark Comets' The Universe's Biggest Threat To Earth?

A comet or asteroid that struck Earth because it wasn't detected quickly enough is one of humanity's greatest natural threats. Image credit: NASA / Don Davis.

There are many cosmic catastrophes that could do us in, completely irrespective of anything that happens here on Earth. A star could pass into our Solar System and swallow up our planet whole, or eject us from our orbit and cause us to permanently freeze over. A supernova or gamma ray burst could go off too close to us, disintegrating all life on the Earth's surface. Or, as we know it did at least once before some 65 million years ago, a large, fast-moving object like a comet or asteroid could have a catastrophic collision with Earth. At least if we're prepared, we ought to see one coming and be able to take precautions. But what if there's no chance? What if an incoming comet is somehow unseeable? David Bertone heard about that possibility, and wants to know!

I recently came across a few articles regarding dark comets, and to say the least it freaked me out! [...] Is Napier right about the dark comets? Are they truly a threat to us [on] earth?

We have lots of threats to life on Earth, and getting struck by a large, fast-moving, unexpected object is certainly among them!

Comet Lovejoy, as seen from the International Space Station, poses no threat to Earth. Image credit: NASA / ISS.

Bill Napier is a scientist who studies potentially hazardous objects from outer space. He rightly points out that, while most efforts to catalogue the potential dangers to Earth focus on near-Earth objects like the asteroids that leave the main belt and cross Earth's orbit, that might not be a good reflection of what's actually likely to get us. Nor is it necessarily an asteroid orbiting interior to Jupiter or a comet orbiting exterior to the orbit of Neptune, just waiting to get perturbed and flung into the inner Solar System. There are plenty of objects orbiting in between the orbits of the four gas giants, known as centaurs, that could be hurled inwards without any warning, and most of them have not been catalogued. Napier postulates that many of these centaurs may be invisible to us, even after being flung inwards, until it's far too late.

While asteroids (grey) and Kuiper Belt objects beyond Neptune (blue and orange) are generally considered Earth's greatest threats, the centaurs (green) number over 44,000. Image credit: WilyD at English Wikipedia.

But this brings up an important question: what could render a comet dark, or otherwise unseeable? It won't simply be a comet from the outer Solar System that's terrible at reflecting light. Sure, a centaur could have had all its volatile ices boiled off over billions of years, reducing its reflectivity tremendously. But the amount of light the Sun emits is so extreme that even a medium-sized comet (or centaur) that absorbed 99.9% of the Sun's light would still be easily visible at the distance of Saturn. Moreover, comets tend to be made up mostly of ices, which are highly reflective and which get brought to the surface as a comet heats up. The only thoroughly "dark" bodies in our Solar System are more like our Moon, which still reflects light very brightly, as any casual watcher of the night sky will tell you. An object as dark as any naturally occurring, abundant element or compound would still be visible from its reflected sunlight, particularly if you looked in the infrared portion of the spectrum.
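A back-of-the-envelope check of that claim, with numbers of my own choosing (a 5 km-radius body at Saturn's distance with an absurdly low 0.1% albedo): even then, over a megawatt of sunlight bounces off it, and the absorbed 99.9% warms it to a temperature at which it glows in the infrared.

```python
import math

SOLAR_CONST = 1361.0    # sunlight at 1 AU, W/m^2
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

d_au, radius_m, albedo = 9.5, 5000.0, 0.001  # assumed "dark centaur" at Saturn

flux = SOLAR_CONST / d_au ** 2                         # ~15 W/m^2 out at Saturn
reflected_w = albedo * flux * math.pi * radius_m ** 2  # reflected sunlight, W
t_eq_k = (flux * (1 - albedo) / (4 * SIGMA)) ** 0.25   # ~90 K equilibrium temperature

print(reflected_w, t_eq_k)
```

Whether that reflected megawatt is bright enough for a given survey telescope to catch depends on its aperture and exposure time, but the point stands: "dark" is nowhere near "invisible", especially to an infrared instrument looking for a 90 K thermal glow.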

Infrared telescopes can see "dark" objects just as well as they can see bright ones. Image credit: NASA/JPL-Caltech.

But there are other possibilities to consider. What if an incoming, highly reflective comet were oriented bizarrely? What if it were quite icy, but reflected all the sunlight that struck it away from Earth, like some kind of strange crystal? It's less obvious, but that wouldn't work, either. When an object like that entered the planet-containing portion of the Solar System, it would heat up. Heat acting on the ices causes the development of a long tail that points away from the Sun, and this would be easily observable by one of many professional or even amateur all-sky surveys before too much time had passed.

Comet 67P/C-G as imaged by Rosetta. Image credit: ESA/Rosetta/NAVCAM — CC BY-SA IGO 3.0

But perhaps nature will conspire to make that tail unseeable from our point of view? In order for the tail to be hidden, the incoming comet would need to be directed straight at us, aligned so that the Sun, the Earth and the comet made a straight line. If the tail points directly away from us and is hidden behind the comet, that would render everything invisible, and we wouldn't be able to see it, right? Unfortunately, that's wrong, too. Comet tails don't simply point away from the Sun, they spread outwards away from a comet. Even a "head-on" comet like this would have a visible coma around it. Again, amateur or professional astronomers would catch this quickly.

The coma of comet 17P/Holmes was visible, even when the comet appeared nearly face-on. Image credit: Wikimedia Commons user Gil-Estel under a CC BY 2.5 license.

But there is a real danger of an invisible comet, and it's very different from the form Napier envisions in any of his scenarios. Imagine, if you will, that a bright, reflective, tail-and-coma-sporting comet were headed right for us. Is there any direction of approach that would render it completely unseeable? There is: from the direction of the Sun.

An X-class solar flare erupted from the Sun’s surface in 2012. Objects like asteroids or comets would be invisible against the brightness of the Sun, and you wouldn't dare point a telescope in that direction anyway. Image credit: NASA/Solar Dynamics Observatory (SDO) via Getty Images.

Telescopes don't dare point too close to the Sun, even from space, since even a glimmer of direct sunlight will fry your optical system. If any object -- comet, asteroid, centaur, even a kicked-up fragment from a collision with Mercury -- either approached the Sun from behind it (from our perspective) or were slingshotted around it, the right trajectory could send it hurtling towards Earth. This is part of the reason why having NASA's STEREO satellites online is so important.

Conceptual drawing of NASA’s twin STEREO spacecraft monitoring the Sun. Image credit: NASA.

At this point, the technology to deflect an incoming asteroid or comet by a significant amount in a short time hasn't been developed, but at least by having a set of observatories at different locations in the Solar System, we could see everything headed for us. In the future, more sensitive infrared all-sky surveys will make a far more complete census of the centaurs in our Solar System, and the launch of WFIRST in the 2020s will help us map potentially hazardous objects to much greater distances than we've presently done. But the odds of a distant object being hurled into us after being perturbed for the first time are exceedingly small; the much scarier prospect is a long-period comet being kicked ever-so-slightly into Earth's orbital path.

The orbital path of Comet Swift-Tuttle, which passes perilously close to crossing Earth’s actual path around the Sun. Image credit: Howard of Teaching Stars.

Comet Swift-Tuttle, which gave rise to the Perseids, is the single most dangerous object known to humanity, and has a chance of impacting us with more than 20 times the energy of the legendary dinosaur-killer in the 4400s. But we've got plenty of time until that might happen. In the meantime, take heart in the fact that, except for Sun-directed asteroids and comets, we can see everything large that could come our way. And if we're lucky enough to make it as a civilization for another thousand years or so, our technology will likely have advanced to the point where asteroid/comet deflection isn't such a daunting task after all!

The comet that gives rise to the Perseid meteor shower, Comet Swift-Tuttle, was photographed during its last pass into the inner Solar System in 1992. Image credit: NASA.

Perhaps if it is, there's always plan B: to clone Bruce Willis.

Ask your Ask Ethan questions by emailing startswithabang at gmail dot com.

14 ‘Death Stars’ tracked heading towards our solar system

A FLEET of ‘Death Stars’ is hurtling towards our Solar System. And this time destruction really could be rained down on our planet, but there’s time to prepare.

Astrophysicist Dr Alan Duffy talks 'Death Stars' and why they'll take 500,000 years to reach us.

Ominous advance. Are ‘near misses’ by 14 stars over the next two million years really a threat? Source: Return of the Jedi/Disney Source:Supplied

THERE’S a fleet of ‘Death Stars’ headed our way. This time destruction really could be rained down upon us all. But there’s still time to linger over a Star Wars sequel or three (hundred thousand).

The wayward orange dwarf star HIP-85605 is just one of several stars detected as being on an intercept course with our solar system. Odds are as high as 90 per cent it will crash into our Oort cloud — an enormous ‘bubble’ of comets that surrounds our Sun — sometime between 240,000 and 470,000 years from now.

And it’s not the only one: There are possibly more than a dozen such ‘Death Stars’ racing our way.

A paper to be published in the journal Astronomy & Astrophysics by astrophysicist Coryn Bailer-Jones of Germany’s Max Planck Institute for Astronomy reveals that there are 14 wandering stars that will pass within three light years of Earth.

HIP-85605 appears likely to become our closest encounter.

It is a cool K-class dwarf star currently 16 light years away, approaching from the direction of the Hercules constellation. It will likely skim past our Solar System at a mere 0.13 to 0.65 light years (roughly 8000 times the distance between the Earth and the Sun).
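That parenthetical conversion is easy to verify (a quick sketch using the standard value of roughly 63,241 astronomical units per light year):

```python
LY_TO_AU = 63241.1  # astronomical units in one light year

near, far = 0.13, 0.65  # the flyby distance range quoted above, in light years
print(f"{near} ly = {near * LY_TO_AU:,.0f} AU")  # roughly 8,221 AU
print(f"{far} ly = {far * LY_TO_AU:,.0f} AU")    # roughly 41,107 AU
```

So "roughly 8000 times the distance between the Earth and the Sun" corresponds to the near end of the quoted range.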

Another seems close behind. The star designated Gliese 710 has been calculated as having a 90 per cent chance of coming within our sphere of influence. Currently lurking some 64 light years away in the Serpens constellation, it’s expected to land in our neighbourhood sometime between 1.3 and 1.5 million years from now.

To put this in perspective: Our closest neighbouring star is Proxima Centauri, a red dwarf some 4 light years away.

Close encounters . 14 wandering stars have been calculated as being on a ‘collision course’ with our Solar System’s Oort cloud. Original images: Solaris/20th Century Fox, Star Wars/Disney Source:Supplied

When Bailer-Jones extended his study to include all 50,000 stars for which we have accurate ‘fixes’ — including their distances, directions and velocities — it revealed 14 stars on courses that would bring them into range of our Solar System in the next two million years.
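The core geometry behind such a search can be sketched simply: if a star's path is approximated as a straight line (a toy model; Bailer-Jones's actual study traces motions through the Galaxy's gravitational field), its closest approach to the Sun follows directly from its current position and velocity vectors. The star below is hypothetical.

```python
import math

def closest_approach(pos, vel):
    """Closest-approach distance and time for straight-line motion.
    pos: current position of the star relative to the Sun (parsecs)
    vel: its velocity (parsecs per million years; 1 km/s is about 1 pc/Myr)
    """
    r2 = sum(p * p for p in pos)
    v2 = sum(v * v for v in vel)
    rv = sum(p * v for p, v in zip(pos, vel))
    t_min = -rv / v2                      # time of closest approach (Myr)
    d_min = math.sqrt(r2 - rv * rv / v2)  # equivalently |r x v| / |v|
    return d_min, t_min

# A hypothetical star 5 pc away, moving at ~4 pc/Myr:
d, t = closest_approach(pos=(4.0, 3.0, 0.0), vel=(-4.0, 0.0, 0.0))
print(f"closest approach {d:.1f} pc in {t:.1f} Myr")  # 3.0 pc in 1.0 Myr
```

Since 1 pc is about 3.26 light years, a 3 pc miss like this toy case would fall well outside the ~3 light year range the study flags as a close pass.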

We’ve already experienced a ‘near-miss’: The white dwarf Van Maanen’s star — fortuitously ‘burnt-out’ — came close to our Sun some 15,000 years ago. Exactly what impact its passing had is as yet unknown.

It may sound like an interstellar game of billiards. In many ways it is. But even when they arrive, Swinburne University astrophysicist Dr Alan Duffy says not to expect being swallowed by a star.

The scale of things is simply too great.

Instead, the threat will be what happens to the masses of comets orbiting in our Solar System’s outskirts — out to as far as one light-year.

“Objects hardly ever meet in space — the distances are so huge — but the gravitational influence of a star is enormous, even something a light-year away can rattle the loosely held Oort Cloud objects,” Dr Duffy says. “(But) there’s no doubt that nearby stars in the past have nudged Oort objects into falling towards the inner solar system.”

Awesome power . A near-miss by a star is nothing to sneeze at, but is it really deadly? Source: Star Wars/Disney Source:Supplied


In the face of a fully armed and operational Death Star such as HIP-85605 or Gliese 710, Admiral Ackbar would rightly say we “can’t repel firepower of that magnitude”.

Stars are impressive. They’re big. They’re fractious. They’re thermonuclear.

Even our own — a somewhat humble, small specimen — can cast worrisome clouds of superheated plasma our way.

So the real questions are: Are we really in the firing line, what is being fired at us — and what are the chances of a bullseye?

Expanded sphere . the Oort cloud is a vast bubble of comets around our Solar System. Source: NASA Source:Supplied

Interstellar push-and-pull:

We’re fairly safe down here. It’s taken millions of years, but Jupiter has managed to clean up most of the inner Solar System by pulling wayward comets and asteroids into its heavy embrace. But it is a fragile balance. Those distant, tumbling mountains of rock and ice can be sent wobbling towards us.

The fear is Earth — and all the inner planets — could be subjected to a bombardment not seen for millions of years: ‘Death Star’ Gamma Microscopii (HIP 103738) appears to have drifted to within a light-year of our Sun some 3,850,000 years ago. There are two impact craters on Earth that could possibly be attributed to this event.

Tumultuous times . A NASA impression of a restless star. Source:Supplied

If the ‘Death Star’ was a hot one, could its powerful UV radiation be added to the planet-killing arsenal it unleashes? Such radiation could tear apart the DNA of all living things once our thin line of defence — the ozone layer — is stripped away.

As Dr Duffy says, it’d have to get extremely close — impossibly close — for its radiation and gravity to have any direct effect.

Gamma Microscopii, the G7-class giant which whizzed past some four million years ago, is believed to have been about 2.5 times the size of our own Sun. Its radiation is not known to have had any effect on our world.

“None of the stars that will likely come close to us are particularly large or bright, meaning that they won’t affect the Earth with their UV or heating directly,” he says. “A star 100 times more luminous than our Sun would have to get as close to the Earth as Jupiter for it to be brighter than the Sun in our sky. If it’s a smaller star then it would have to get even closer. Long before then, the gravity of this intruder would likely have flung the Earth out of its orbit. Thankfully no star is predicted to come that close!”
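Dr Duffy's figure is the inverse-square law at work: apparent brightness scales as luminosity over distance squared, so a star 100 times the Sun's luminosity matches the Sun's apparent brightness at sqrt(100) = 10 times Earth's distance from the Sun. A quick consistency check (the 100x figure is the one quoted above; everything else is standard):

```python
import math

def match_distance_au(luminosity_solar):
    """Distance (in AU) at which a star of the given luminosity (in units
    of the Sun's) looks as bright as the Sun does from Earth at 1 AU.
    From flux F proportional to L / d^2: set L / d^2 = 1 / 1^2, solve for d."""
    return math.sqrt(luminosity_solar)

print(match_distance_au(100))  # 10.0 AU: out among the gas giants
print(match_distance_au(0.1))  # a dim red dwarf must come inside Earth's orbit
```

Ten AU is roughly the realm of Jupiter and Saturn, consistent with the quote; and as Duffy says, a smaller (dimmer) star would have to come closer still.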

Interstellar scale . An indication of the distances between our Solar System and its nearest neighbours. Source: NASA Source:Supplied

Each ‘Death Star’ will take about 30,000 years to pass through our Oort Cloud. We will remain in their ‘spheres of influence’ for tens of thousands of years more. There is a remote chance one of them may reach the end of its life cycle and explode during that time.

“We see radioactive isotopes on Earth which point to nearby supernovas over the past few million years,” said Dr Bailer-Jones.

But Dr Duffy says the odds of one of these stars exploding when close enough to have any impact (and that means within several light years) is infinitesimal.

“However the main reason this isn’t a worry is that none of the nearby stars that are drifting towards us from this study are big enough to explode as a supernova,” he says.

Unstoppable force . If a star was really to collide with our Solar System, there’d be nothing we could do about it. Source: Return of the Jedi/Disney Source:Supplied


No. But these stars have a typically Imperial problem: Accuracy.

And, by the time the first of these Death Stars arrive, our race will either already be dead — or so highly evolved as to simply not care.

“While a direct collision by a large comet would be a disaster for the Earth we’re actually a very small target,” Dr Duffy says. “A much bigger target is the gas giant Jupiter with an enormous gravitational pull that attracts the comets and ‘cleans’ up the Solar System. Yet even Jupiter is tiny on that scale so it’s much more likely that any comets that tumble in towards us will just pass harmlessly by.”

Even if these stars did knock Oort objects out of orbit, those objects could take up to two million years to drift in towards the inner planets.

Many of the stars set to cross our path are dwarf stars smaller than our own. This means the potential influence they may have on our Oort cloud will be limited.

Comet storm? Even a ‘hail’ of comets sent spiralling out of the Oort cloud would have little chance of hitting Earth. Source: NASA. Source:Supplied

We can get more Bothan-like data (without the casualties) right now: Instead of R2D2, the carrier of such secret plans would be the European Space Agency’s Gaia telescope. It can help pinpoint the tracks of these deadly interstellar bodies to determine how many are on the way — and which ones may have done us damage in the past.

It’s currently cataloguing over a billion objects in our interstellar neighbourhood.

“The Gaia satellite is currently in space creating the best map yet of all the nearby stars allowing us to know to great accuracy just which of the billions of stars in our galaxy will likely ever get close enough to cause us problems,” Dr Duffy says.

The results are due to be in by 2016.

It’s just such data Dr Bailer-Jones is waiting for. He is now working on a study to figure out the probability of Earth being hit by one of these deflected comets.

“Even though the galaxy contains very many stars,” Bailer-Jones told Universe Today, “the spaces between them are huge. So even over the (long) life of our galaxy so far, the probability that any two stars have actually collided — as opposed to just coming close — is extremely small.”

As always, the devil is in the detail.

Death throes . Even a supernova has to be close (in interstellar terms) to pose a threat to Earth. Source: Star Wars/Disney Source:Supplied

Improvements coming

The detection was also direct confirmation that short bursts of gamma-ray radiation are linked to colliding neutron stars.

By combining information from gravitational waves and the light collected by telescopes, researchers also used a new technique to measure the expansion rate of the Universe. This technique was first proposed in 1986 by the University of Cardiff's Prof Bernard Schutz.
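In outline, Schutz's idea is that the gravitational waveform encodes the source's distance directly, while the accompanying light identifies the host galaxy and hence its recession velocity; Hubble's law then gives the expansion rate. A minimal sketch, with round numbers similar to those reported for the 2017 neutron star merger (the specific values here are illustrative assumptions, not from this article):

```python
def hubble_constant(velocity_km_s, distance_mpc):
    """Hubble's law v = H0 * d, rearranged for H0 (km/s per Mpc)."""
    return velocity_km_s / distance_mpc

# Assumed GW170817-like numbers: gravitational-wave-inferred distance
# ~44 Mpc; Hubble-flow recession velocity of the host galaxy ~3,000 km/s.
H0 = hubble_constant(3000, 44)
print(f"H0 = {H0:.0f} km/s/Mpc")  # ~68, consistent with other methods
```

A single event gives only a rough estimate; the appeal of the method is that many such "standard siren" detections would tighten the measurement without relying on the traditional astronomical distance ladder.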

Prof Stephen Hawking of Cambridge University told BBC News that this was "the first rung of a ladder" for a new method of measuring distances in the Universe.

"A new observational window on the Universe typically leads to surprises that cannot yet be foreseen. We are still rubbing our eyes, or rather ears, as we have just woken up to the sound of gravitational waves," he said.

Prof Nial Tanvir, from Leicester University, uses the VISTA telescope in Chile.

He and his colleagues started searching for the neutron star collision as soon as they heard of the gravitational wave detection.

"We were really excited when we first got notification that a neutron star merger had been detected by LIGO," he said. "We stayed up all night analysing the images as they came in, and it was remarkable how well the observations matched the theoretical predictions that had been made."

LIGO is now being upgraded. In a year's time it will be twice as sensitive, and so will be able to scan eight times the volume of space.

The researchers believe that detections of black holes and neutron stars will become commonplace. And they hope to begin detecting objects they currently cannot even imagine, ushering in a new era of astronomy.
