Facebook's new chatbot has already gone 'off the rails'


Facebook’s efforts to develop conversational chatbots may have come a long way in the past few years, but as recent trials have shown, they’re still prone to some awkward slip-ups.

The site recently launched a new Messenger-based game to show off its improvements in artificial intelligence, allowing users to interact with a bot and effectively train it to speak more naturally.

But several examples revealed by Motherboard show how quickly these conversations can go awry.

In addition to replies that simply don’t make sense conversationally, Facebook’s Beat the Bot has been spewing all sorts of off-topic statements, including one instance in which it said: ‘together, we are going to make America great again.’

Facebook researchers Emily Dinan and Jason Weston detailed the firm’s latest improvements in conversational AI in a blog post last week.

At the same time, the site rolled out its new Messenger game, Beat the Bot.

According to Facebook, the goal of the game is to ‘provide researchers with high-signal data from live interactions instead of fixed language data.’

This means its capabilities will continue to improve as more and more users play.

At a glance, Beat the Bot is fairly straightforward in its answers, which aim to be both personable and consistent.

When asked what it likes to do in its spare time, for example, the bot told Dailymail.com that it likes to ‘play soccer, read asterix, and draw.’ Pressing it for further detail, however, may cause things to unravel.

In a follow-up, when asked if it’s referencing Asterix the comic, it offered the options: ‘yes, it is an American book,’ and ‘yes it is comics in enemy.’

Conversations conducted by Motherboard uncovered even stranger replies, including a seemingly random jump to the MAGA slogan during a conversation about One Direction, and the claim that Steve Irwin is its father, with the bot noting that ‘the sting rays got him.’

The bot could not provide solid answers to questions such as ‘Do you think Mark Zuckerberg has ever killed a man?’

In that case, Beat the Bot said: ‘I don’t know, maybe he does.’

Answers like this are what the bot resorts to in situations that may be over its head, according to Facebook.

The bots can reply on a scale of ‘spiciness,’ with bland answers being the safest, simplest bet and ‘too spicy’ marking a bit of an overstep.

The key is getting the model to fall somewhere between the two.

‘Generative dialogue models frequently default to generic, safe responses, such as “I don’t know,”’ the researchers explain in the blog post.

Conversational bots must also learn to convey the appropriate attitude – for example, cultured as opposed to arrogant.

While raising the level of specificity can help the bot seem more natural, ‘overly specific models risk being too narrowly focused and unrelatable conversational partners,’ the researchers note.

As chatbots engage more and more with real humans, their speech becomes more fluid and harder to identify as coming from an AI, but that also means they tend to reflect what they’ve been trained on.

In the recent past, this has led to high-profile blunders such as Microsoft’s Twitter bot, Tay, which began spouting racist tirades within hours of its public launch.


