Twitter is working with academic researchers to decide whether it should ban white supremacists from its platform

  • Academic researchers are analyzing whether white supremacists and nationalists should be banned
  • They will look at what roles Twitter plays in making discussion worse or better
  • Comes as Twitter faces criticism for its failure to crack down on hateful speech 

Twitter says it’s looking into whether or not white supremacists should be allowed on its platform, amid increasing calls for a crackdown on extremist content. 

The social media giant is examining how white nationalists and supremacists use its platform to help it decide whether the groups should be banned, or if they should be allowed to continue to post so that other users can debate them, according to Motherboard.   

It comes as Twitter has faced criticism over the plethora of extremist content shared on its site and the fact that it has taken few measures to curb hateful rhetoric.

Twitter is examining how white nationalists and supremacists use its platform to help it decide whether the groups should be banned, or if they should be allowed to continue to post

Researchers are looking at what roles Twitter plays in making conversations around white nationalism and white supremacy worse or better.  

From there, it hopes to have a better idea of whether or not banning these groups would be the right move. 

‘Is it the right approach to deplatform these individuals? Is the right approach to try and engage with these individuals? How should we be thinking about this? What actually works?’ Vijaya Gadde, Twitter’s head of trust and safety, told Motherboard. 

Last month, Twitter CEO Jack Dorsey and Gadde met with President Donald Trump to discuss the ‘health of public conversations’ on the site. 

Twitter has become notorious for its characteristically slow responses to pressing problems on the site, such as abuse, trolls and hateful content.  

For that reason, many aren’t surprised that the company is only now looking into the issue of white supremacy and white nationalism, several years after such content began to proliferate on Twitter. 

‘The idea that they are looking at this matter seriously now as opposed to the past indicates the callousness with which they’ve approached this issue on their platform,’ Angelo Carusone, president of Media Matters, told Motherboard. 

Twitter CEO Jack Dorsey (pictured) has been repeatedly criticized for his company's characteristically slow responses to pressing issues like abuse, trolls and harassment

Similarly, Heidi Beirich, director of the intelligence project at the Southern Poverty Law Center, told Motherboard it has been proven that white supremacists continue to thrive on Twitter. 

‘Twitter has David Duke on there; Twitter has Richard Spencer,’ she told Motherboard. 

‘They have some of the biggest ideologues of white supremacy and people whose ideas have inspired terrorist attacks on their site, and it’s outrageous.’

Twitter has taken some steps to crack down on extremism, joining Facebook, YouTube, Spotify, LinkedIn and others last year in banning right-wing conspiracy theorist Alex Jones and his Infowars show from its platform. 

In other ways, Twitter and several social media platforms have yet to fully reckon with the amount of extremist content on their platforms. 

YouTube has also become a popular destination for white nationalism and white supremacy, but it has so far refused to ban either form of content from its site. 

So far, the only major social media platform to take a stand against white nationalism and white separatism is Facebook, which banned those kinds of posts in March.  

Posts that include statements like ‘I am a proud white nationalist’ and ‘Immigration is tearing this country apart’ will immediately be banned.

If a user tries to publish a post around these themes, they’ll instead be redirected to a nonprofit called Life After Hate, which helps individuals involved in these extremist groups exit them safely. 

WHAT IS TWITTER’S POLICY ON HATE SPEECH?

Twitter says it does not tolerate behaviour that harasses, intimidates, or uses fear to silence other social network users.

Twitter users who violate these rules could find their content deleted or their accounts suspended by the social network.

What does Twitter forbid?

According to the company, it will remove any tweets that:

  • Threaten physical violence
  • Promote attacks on people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease 
  • Reference mass murder, violent events, or specific means of violence in which such groups are the primary targets or victims
  • Incite fear about a protected group
  • Repeatedly use non-consensual slurs, epithets, or racist and sexist tropes
  • Are designed to degrade a specific user     

Twitter users can target individuals or specific groups in a number of ways, for example by using the @ mention feature or tagging a photo. 

How does Twitter enforce these rules?

According to the company, the first thing it does whenever an account or tweet is flagged as inappropriate is check the context.

Twitter says: ‘Some Tweets may seem to be abusive when viewed in isolation, but may not be when viewed in the context of a larger conversation.

‘While we accept reports of violations from anyone, sometimes we also need to hear directly from the target to ensure that we have proper context.’

Twitter says the total number of reports received around an individual post or account does not impact whether or not something will be removed.

However, it could help Twitter prioritise the order in which it looks through flagged tweets and accounts.

What happens if you violate Twitter’s policy? 

‘The consequences for violating our rules will vary depending on the severity of the violation and the person’s previous record of violations,’ Twitter says. 

The penalties range from requesting a user voluntarily remove an offending tweet, to suspending an entire account. 


