
Facebook rolled out a chatbot to advise employees on how to answer questions about its controversies



Facebook quietly rolled out a chatbot to help employees tackle questions that family and friends may ask about its disastrous reputation over the holidays.

The tool, called ‘Liam Bot’, walks employees through tough and uncomfortable questions about Facebook’s many controversies using a series of talking points.

The answers are said to have been written by Facebook’s public relations team and line up with the executive team’s statements on topics such as free speech, election meddling and more.

The news was first reported by The New York Times (NYT), which noted Facebook began testing Liam Bot in the spring and made it live just in time for Thanksgiving dinner with family and friends.

The NYT noted that when a question is asked, such as one regarding hate speech on the platform, the chatbot instructs the employee to answer with certain talking points.


Facebook quietly rolled out a chatbot to help employees tackle questions that family and friends may ask about its disastrous reputation over the holidays. The tool, called ‘Liam Bot’, navigates employees through tough and uncomfortable questions about Facebook’s many controversies using a series of points


For example: ‘It has hired more moderators to police its content’, ‘It is working on AI to spot hate speech’ and ‘Regulation is important for addressing the issue’.

It will also provide links from Facebook’s blog and other relevant documents that focus on the matter.

A Facebook spokesperson told The New York Times, ‘Our employees regularly ask for information to use with friends and family on topics that have been in the news, especially around the holidays.’

‘We put this into a chatbot, which we began testing this spring.’

The answers are said to have been written by Facebook’s public relations team and line up with the executive team’s statements on topics such as free speech, election meddling and more


The Liam Bot comes at a time when Facebook has been under scrutiny for a variety of controversies, beginning with its role in the 2016 US presidential election.

Many claim the platform was flooded with misinformation that influenced Donald Trump’s victory.

The site was criticized for disseminating fake news stories, especially pro-Trump stories, in the days and weeks leading up to the election.

Then, just two years later, the Cambridge Analytica scandal broke – millions of user profiles were harvested by Cambridge Analytica without users’ consent and used for political advertising purposes.

It was also revealed just last month that Facebook was testing a facial recognition app on its employees that was capable of identifying a person by pointing a smartphone camera at them. 

The shocking report was first revealed by Business Insider, which cited anonymous sources saying the app was developed between 2015 and 2016 and has since been shut down.

Facebook did confirm that it developed the app, but denied it was capable of identifying members of its social media network and pulling up their profiles.

The site itself has come under fire for allowing violent and explicit content to stay posted, including footage of war crimes, pro-eating-disorder and white-supremacist groups, and child abuse images.

What is the Cambridge Analytica scandal?

Consultancy firm Cambridge Analytica had offices in London, New York and Washington, as well as in Brazil and Malaysia.

The company boasted it could ‘find your voters and move them to action’ through data-driven campaigns and a team that included data scientists and behavioural psychologists.

In 2013, Cambridge professor Aleksandr Kogan used his app, This Is Your Digital Life, to ask 270,000 Facebook users questions about their personalities.

By answering them, the users granted Kogan access to not only their profiles but to those of their friends.

He subsequently sold that information to Cambridge Analytica for $51 million.

Facebook changed its rules in 2014 to limit what data such apps could access.

A year later, the company learned that Kogan had sold his findings and contacted both him and Cambridge Analytica to tell them to delete the data, which they promised to do.

In March 2018, Facebook announced that it had suspended Cambridge Analytica after being warned of looming media reports claiming that not all of the information had been destroyed.

Those reports, which were informed by the accounts of whistleblowers who worked at the firm, also revealed the true scale of the breach.  

It was initially estimated that the firm had mined the information of 55 million Facebook users, even though just 270,000 people gave it permission to do so.

But Facebook later revealed the number was actually as high as 87 million.

The data firm suspended its chief executive, Alexander Nix, after recordings emerged of him making a series of controversial claims, including boasts that Cambridge Analytica had a pivotal role in the election of Donald Trump.

The same information is said to have been used to help the Brexit campaign in the UK.

 


