Leiden University computer algorithm spots ELEVEN asteroids that could hit Earth triggering ‘unprecedented devastation’ — but were ‘MISSED by NASA software’

  • Researchers used a supercomputer to model the solar system over 10,000 years
  • Within this model, they next simulated asteroids that would eventually hit Earth
  • The library of fake hazards was used to train an algorithm to spot real ones
  • This software — dubbed HOI — is capable of being run on just a simple laptop
  • The new hazardous asteroids it flagged won’t get close to Earth until 2131–2923

A computer algorithm from Leiden University in the Netherlands has spotted eleven asteroids that could eventually hit Earth and cause ‘unprecedented devastation’.

All were missed by NASA software because of their chaotic orbits, which are difficult for current techniques to predict and identify as being potentially dangerous.

Each is more than 328 feet (100 metres) in diameter and will pass within ten times the distance between the Earth and the moon.

For comparison, the Tunguska object which flattened 772 square miles of forest in Siberia had a diameter of around 164–262 feet (50–80 metres). 

However, these space rocks won't pose a threat in our lifetime, as they will only get worryingly near to Earth between the years 2131 and 2923.


In their study, astronomer and simulation expert Simon Portegies Zwart and colleagues trained a so-called neural network — an algorithm modelled on the human brain that can study patterns — to hunt for potentially dangerous asteroids.

To do this, they first modelled the orbits of the Sun and its planets over the next 10,000 years, tracking the relative positions of the different bodies.

Next, they gradually 'rewound' this simulation, while modelling what would happen if they 'threw' asteroids off the Earth and out into space.

When run forward again, this created a database of imaginary asteroids of known orbits that eventually collide with the Earth — and let the team study the distribution of these asteroids at the equivalent of today’s date in the simulation.
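This rewind-and-replay trick relies on the fact that the standard gravitational integrators used in such simulations are time-reversible: running the same update rule with a negated time step retraces the orbit exactly. A minimal sketch of the idea (not the authors' ALICE code) with a single Earth-like body orbiting the Sun:

```python
# Toy illustration of a reversible orbit integration. A kick-drift-kick
# 'leapfrog' integrator run forward and then with a negated time step
# returns to its starting state, which is what makes 'rewinding' a
# simulation possible. Units and setup here are illustrative.
import numpy as np

GM = 4 * np.pi ** 2            # Sun's gravitational parameter, AU^3 / yr^2
DT = 0.001                     # time step in years

def accel(pos):
    """Gravitational acceleration toward the Sun at the origin."""
    r = np.linalg.norm(pos)
    return -GM * pos / r ** 3

def leapfrog(pos, vel, dt, steps):
    """Kick-drift-kick leapfrog: symplectic and time-reversible."""
    for _ in range(steps):
        vel = vel + 0.5 * dt * accel(pos)
        pos = pos + dt * vel
        vel = vel + 0.5 * dt * accel(pos)
    return pos, vel

# An Earth-like body: 1 AU from the Sun, circular speed 2*pi AU/yr.
p0 = np.array([1.0, 0.0])
v0 = np.array([0.0, 2 * np.pi])
p1, v1 = leapfrog(p0, v0, DT, 5000)    # five years forward
p2, v2 = leapfrog(p1, v1, -DT, 5000)   # 'rewind' with a negated step
print("rewind error:", np.linalg.norm(p2 - p0))
```

The backward pass undoes the forward pass step by step, so the final state matches the initial one up to floating-point rounding.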

The researchers then used this library of imaginary hazardous asteroids to train the neural network in how to identify possibly dangerous asteroids in the real world. 
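Once such a library of simulated impactors exists, the training step is standard supervised learning: the network sees labelled examples and must learn what separates hazardous orbits from safe ones. A toy, single-neuron sketch (not the real HOI network; the features, thresholds, and labelling rule below are all invented for illustration):

```python
# Toy sketch of training a classifier on a library of simulated
# asteroids. Here an object is labelled hazardous if it is both large
# and passes close to Earth; the 'neuron' must recover that rule from
# examples alone. All quantities are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
diameter_m   = rng.uniform(10, 1000, n)   # simulated object sizes
miss_dist_ld = rng.uniform(0, 50, n)      # closest approach, lunar distances

# 'Ground truth' labels from the simulated library: big AND close.
y = ((diameter_m > 100) & (miss_dist_ld < 10)).astype(float)

# Two scaled input features for a single sigmoid 'neuron'.
X = np.column_stack([diameter_m / 1000, miss_dist_ld / 50])

w, b = np.zeros(2), 0.0
for _ in range(5000):                     # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))    # sigmoid activation
    grad = p - y                          # log-loss gradient w.r.t. logit
    w -= 2.0 * (X.T @ grad) / n
    b -= 2.0 * grad.mean()

accuracy = (((X @ w + b) > 0) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.1%}")
```

A real system would use many more orbital parameters and a deeper network, but the training loop has the same shape: predict, compare with the simulated label, nudge the weights.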

The first set of calculations — modelling the solar system and the artificial asteroids — were all undertaken using Leiden’s powerful new supercomputer, ALICE.

However, once trained in the art of asteroid watching, the neural network itself can be run on just a simple laptop. 

The team have dubbed their system the Hazard Object Identifier, or ‘HOI’ — which means ‘hello’ in Dutch.

When put through its paces, the researchers found that HOI can spot well-known hazardous near-Earth asteroids — proving 90.99 per cent accurate when tested against a NASA database of 2,000 space objects.
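An accuracy figure like that is simply the share of catalogued objects whose known hazardous-or-safe status the classifier reproduces. A minimal illustration (the lists below are invented, not NASA data):

```python
# Illustrative only: how a percentage-accuracy score is computed from
# a labelled catalogue. The labels and predictions here are made up.
known_labels = [True, True, False, True, False]   # catalogue says hazardous?
predictions  = [True, False, False, True, False]  # classifier's verdicts
hits = sum(p == k for p, k in zip(predictions, known_labels))
accuracy = 100 * hits / len(known_labels)
print(f"{accuracy:.2f} per cent")  # → 80.00 per cent
```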

Furthermore, it was also able to pick out the eleven new asteroids that had not previously been classified as potential hazards. 

‘We now know that our method works, but we would certainly like to delve deeper in the research with a better neural network and with more input,’ said Professor Portegies Zwart.

‘The tricky part is that small disruptions in the orbit calculations can lead to major changes in the conclusions.’

Despite this, the researchers hope that in future artificial neural networks might be used to detect potentially hazardous objects in space — and do so far faster than the methods used by space agencies today.

Identifying asteroids that are on a collision course with the Earth earlier would allow humanity to develop a strategy to prevent an impact, the team said.

The full findings of the study were published in the journal Astronomy & Astrophysics.


Deep learning is a form of machine learning with a wide range of applications. 

It is a field which was inspired by the human brain and focuses on building artificial neural networks.

It was originally based on brain simulations, with the aim of making learning algorithms more capable and easier to use. 

Processing vast amounts of complex data then becomes much easier and allows researchers to trust algorithms to draw accurate conclusions based on the parameters the researchers have set. 

Task-specific algorithms can perform better on narrow, well-defined problems, but deep learning allows a much wider range of data to be handled. 



