
Teaching AI how to feel FEAR could make autonomous cars better drivers, study suggests



  • Microsoft researchers used pulse sensors to track people’s fear responses
  • Signals were then used to train an algorithm and guide it through a driving simulation
  • They found that agents which learned to fear experienced fewer crashes in tests

Artificial intelligence has become exceedingly advanced in recent years, so much so that the prospect of self-driving cars on city roads is no longer a far-off concept.

But despite their current capabilities, there’s one thing humans have that AI inherently lacks – fear.

Physiological responses driven by fear help us make critical decisions and stay on our toes, especially in situations like driving.

In a new study, Microsoft researchers built on this idea to improve the decision-making skills of self-driving cars, in an effort to develop ‘visceral machines’ that learn faster and make fewer mistakes.

AI has become exceedingly advanced in recent years, so much so that the prospect of self-driving cars on city roads is no longer a far-off concept. Despite their current capabilities, there’s one thing humans have on our side that AI inherently doesn’t have – fear. File photo

The team detailed their findings in a paper presented at the 2019 International Conference on Learning Representations (ICLR).

To teach AI to ‘feel’ fear, the researchers used pulse sensors to track people’s arousal as they drove in a simulator.

These signals were then fed to the algorithm to learn which situations caused a person’s pulse to spike.
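
In outline, that means fitting a model which maps what the simulated car sees to the pulse response a human showed in the same situation. The sketch below is a hypothetical illustration of such a predictor – the network architecture, the 32-feature observation vector and every name here are assumptions, not the authors’ actual code.

```python
# Hypothetical sketch: learn to predict a normalised, pulse-derived
# arousal score from driving-simulator observations. All names, sizes
# and the architecture are illustrative assumptions.
import torch
import torch.nn as nn

class ArousalPredictor(nn.Module):
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),  # arousal normalised to [0, 1]
        )

    def forward(self, obs):
        return self.net(obs)

model = ArousalPredictor(obs_dim=32)   # 32 input features is an assumption
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder stand-ins for (simulator observation, measured pulse) pairs.
obs = torch.randn(256, 32)
arousal = torch.rand(256, 1)

for _ in range(100):
    optimiser.zero_grad()
    loss = loss_fn(model(obs), arousal)
    loss.backward()
    optimiser.step()
```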

‘As people learn to navigate the world, autonomic nervous system (e.g., “fight or flight”) responses provide intrinsic feedback about the potential consequence of action choices (e.g., becoming nervous when close to a cliff edge or driving fast around a bend.),’ authors Daniel McDuff and Ashish Kapoor explain in the paper’s abstract.

‘Physiological changes are correlated with these biological preparations to protect one-self from danger.’

According to the researchers, teaching the algorithm when a person might feel more anxious in a given situation could serve as a guide to help machines avoid risks.

‘Our hypothesis is that such reward functions can circumvent the challenges associated with sparse and skewed rewards in reinforcement learning settings and can help improve sample efficiency,’ the team explains.
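
In reinforcement-learning terms, the predicted arousal becomes an intrinsic reward blended with the task’s ordinary extrinsic reward. Below is a minimal sketch of one such blend – the linear combination and the 0.5 weight are illustrative assumptions, not the paper’s exact formulation.

```python
def shaped_reward(extrinsic: float, predicted_arousal: float,
                  weight: float = 0.5) -> float:
    """Blend the task's extrinsic reward with an intrinsic penalty
    proportional to predicted physiological arousal.

    The linear form and the 0.5 weight are assumptions for illustration.
    """
    return (1 - weight) * extrinsic - weight * predicted_arousal
```

Raising the weight makes the learned policy more risk-averse; lowering it favours mission progress over caution.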

The researchers put the autonomous software through a simulated maze filled with walls and ramps to see how it performed with fear instilled in it.

Compared with an AI trained only on wall proximity, the system that had learned fear was much less likely to crash.

‘A major advantage of training a reward on a signal correlated with the sympathetic nervous system responses is that the rewards are non-sparse – the negative reward starts to show up much before the car collides,’ the researchers wrote.

‘This leads to efficiency in training and with proper design can lead to policies that are also aligned with the desired mission.’
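
To see why density matters, compare a collision-only penalty with an arousal-style penalty that ramps up as the car approaches a wall. The functions and numbers below are illustrative assumptions, not the paper’s reward.

```python
def sparse_reward(collided: bool) -> float:
    # Feedback arrives only at the instant of the crash.
    return -1.0 if collided else 0.0

def arousal_style_reward(distance_to_wall: float,
                         danger_radius: float = 5.0) -> float:
    # Stand-in for a learned arousal model: the penalty grows smoothly
    # as the car closes on a wall, well before any collision occurs.
    closeness = max(0.0, 1.0 - distance_to_wall / danger_radius)
    return -closeness
```

An agent learning from the second signal gets feedback on every step it takes near danger, rather than a single lesson delivered only after it has already crashed.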

But there are caveats.

‘While emotions are important for decision-making, they can also detrimentally affect decisions in certain contexts,’ the researchers note.

‘Future work will consider how to balance intrinsic and extrinsic rewards and include extensions to representations that include multiple intrinsic drives (such as hunger, fear and pain).’

HOW DOES ARTIFICIAL INTELLIGENCE LEARN?

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.   
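
As a toy illustration of that teaching-by-example process – the XOR pattern below is a stand-in, unrelated to the study’s actual models – a tiny network can be trained on labelled inputs until it reproduces the pattern:

```python
# Didactic example: a small network learns the XOR pattern from four
# labelled inputs. Purely illustrative.
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(),
                    nn.Linear(8, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=0.1)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(net(X), y)
    loss.backward()
    opt.step()

print(net(X).round())  # approximates [0, 1, 1, 0] after training
```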

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn. ANNs can be trained to recognise patterns in information – including speech, text data, or visual images

Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.

The process of inputting this data can be extremely time-consuming and is limited to one type of knowledge.

A newer breed of ANN, the generative adversarial network (GAN), pits two networks against each other, allowing them to learn from one another.

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems. 
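
A minimal sketch of that adversarial setup, on the toy task of mimicking a one-dimensional Gaussian – every choice here (network sizes, learning rates, the target distribution) is an illustrative assumption:

```python
# Hypothetical minimal GAN: a generator learns to mimic samples from
# N(3, 2) while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                  nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 3   # samples from the target N(3, 2)
    fake = G(torch.randn(64, 1))        # generator's attempted imitations

    # Discriminator update: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator output 1 for fakes.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```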
