
'Deepfake' doctored videos of celebrities, politicians


Technology needed to doctor images and videos is advancing rapidly and getting easier to use, experts have warned.

Government agencies and academics are racing to combat so-called deepfakes, amid the spreading threat they pose to societies.

Advances in artificial intelligence could soon make creating convincing fake audio and video relatively easy, which the Pentagon fears will be used to sow discord ahead of next year’s US presidential election.    

Deepfakes combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network (GAN).
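In machine-learning terms, a GAN pits two neural networks against each other: a generator that fabricates samples and a discriminator that tries to tell them apart from real ones, with each improving in response to the other. The toy sketch below shows that adversarial training loop in PyTorch; the tiny networks and random stand-in data are assumptions for illustration only, not how any real deepfake tool is built.

```python
# Minimal sketch of a generative adversarial network (GAN) in PyTorch.
# Illustrative only: real deepfake tools use far larger networks trained
# on face images; here both networks are tiny and the "data" is random.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, data_dim)  # stand-in for a dataset of real images

for step in range(100):
    real = real_data[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, latent_dim))

    # Discriminator learns to tell real from fake.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```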


HOW DOES DEEPNUDE WORK?  

It is a downloadable offline app which works on Windows and Linux. 

It is believed the software is based on pix2pix, an open-source algorithm developed by University of California, Berkeley researchers in 2017. 

Pix2pix uses generative adversarial networks (GANs), which work by training an algorithm on a huge dataset of images.

A picture is fed into the software and a nude version is then generated at the touch of a button.
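For readers curious about the mechanics, the snippet below sketches the conditional, image-to-image structure that pix2pix popularised: the generator receives an input image rather than random noise, and the discriminator scores (input, output) pairs patch by patch. The tiny layers and random tensors are illustrative assumptions; this is a hedged sketch of the published pix2pix idea, not the app's actual code.

```python
# Toy sketch of the pix2pix idea: a conditional GAN where the generator
# translates an input image into an output image, and the discriminator
# judges (input, output) pairs. Shapes are tiny stand-ins for real photos.
import torch
import torch.nn as nn

# Generator: input image in, translated image out (pix2pix uses a U-Net;
# this two-layer stack is a placeholder for that architecture).
G = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)
# Discriminator: sees the input and candidate output stacked together,
# so it learns whether the output plausibly corresponds to the input.
D = nn.Sequential(
    nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

x = torch.randn(1, 3, 32, 32)          # input photo (random stand-in)
y_fake = G(x)                          # translated output
score = D(torch.cat([x, y_fake], 1))   # real/fake score per spatial patch
print(score.shape)  # patch-wise scores, as in pix2pix's PatchGAN discriminator
```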

Deepfakes are used to produce or alter video content so that it presents something that did not, in fact, occur.

They began in porn – there is a thriving online market for celebrity faces superimposed on porn actors’ bodies – but so-called revenge porn, the malicious sharing of explicit photos or videos of a person, is also a massive problem.

The video that kicked off the concern last month was a doctored video of Nancy Pelosi, the speaker of the US House of Representatives.

It had simply been slowed down to about 75 per cent of its original speed to make her appear drunk or slurring her words.

The footage was shared millions of times across every platform, including by Rudy Giuliani – Donald Trump’s lawyer and the former mayor of New York.

The danger is that making a person appear to say or do something they did not has the potential to take the war of disinformation to a whole new level. 

The threat is spreading, as smartphones have made cameras ubiquitous and social media has turned individuals into broadcasters.

This leaves the companies that run those platforms, and governments, unsure how to tackle the issue.

‘While synthetically generated videos are still easily detectable by most humans, that window is closing rapidly,’ Jeffrey McGregor, chief executive officer of Truepic, a San Diego-based startup that is developing image-verification technology, told the Wall Street Journal.

‘I’d predict we see visually undetectable deepfakes in less than 12 months,’ he said.

‘Society is going to start distrusting every piece of content they see.’

McGregor’s company is working with Qualcomm Inc. – the biggest supplier of chips for mobile phones – to add its technology to the hardware of cellphones.

The technology would automatically mark photos and videos with data such as the time and location at the moment they are taken, so that they can be verified later.
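Neither company has published the exact mechanism, but the general idea of capture-time verification can be sketched as follows: hash the image bytes together with the time and location metadata and sign the result with a key held by the device, so that any later edit breaks the check. The names and the key handling below are hypothetical simplifications, not Truepic’s actual protocol.

```python
# Hedged sketch of capture-time provenance: hash the image bytes together
# with time/location metadata and sign them with a device key, so any later
# edit breaks verification. NOT Truepic's actual protocol; key handling is
# deliberately simplified for illustration.
import hashlib
import hmac
import json
import time

DEVICE_KEY = b"secret-key-held-by-device"  # hypothetical

def mark_capture(image_bytes: bytes, lat: float, lon: float) -> dict:
    """Create a provenance record at the moment a photo is taken."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": time.time(),
        "location": [lat, lon],
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Check that the image and its metadata are unmodified."""
    sig = record.pop("signature")
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    record["signature"] = sig
    return hmac.compare_digest(sig, expected) and \
        record["sha256"] == hashlib.sha256(image_bytes).hexdigest()

photo = b"...jpeg bytes..."                 # stand-in image data
rec = mark_capture(photo, 32.7157, -117.1611)
print(verify_capture(photo, rec))           # True: untouched image passes
print(verify_capture(photo + b"edit", rec)) # False: edited image fails
```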

Truepic also offers a free app consumers can use to take verified pictures on their smartphones.

The goal is to create a system similar to Twitter’s method of verifying accounts, but for photos and videos, Roy Azoulay, the founder and CEO of Serelay, a U.K.-based startup that is also developing ways to stamp images as authentic when they are taken, told the WSJ.

When a photo or video is taken, Serelay can capture data such as where the camera was in relation to cellphone towers or GPS satellites.  

Meanwhile, the U.S. Defence Department is researching forensic technology that can be used to detect whether a photo or video was manipulated after it was made. 

The forensic approach looks for inconsistencies in pictures and videos – such as inconsistent lighting – that serve as clues to whether images have been doctored.
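One simple, well-known forensic check in this spirit is error level analysis (ELA): re-save a JPEG and diff it against the original, since regions pasted in from elsewhere often compress differently from their surroundings. The sketch below, using the Pillow imaging library with hypothetical function and file names, shows the flavour of the idea; real forensic systems are far more sophisticated.

```python
# Hedged sketch of error level analysis (ELA), one basic forensic check.
# Bright regions in the difference image compress inconsistently with
# their surroundings, which can hint at pasted-in or edited areas.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return a difference image between a photo and a recompressed copy."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # recompress once
    resaved = Image.open(buf)
    return ImageChops.difference(original, resaved)

# Usage (hypothetical file names):
# ela = error_level_analysis("suspect_photo.jpg")
# ela.save("ela_map.png")  # inspect bright regions for possible edits
```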

Last month, Facebook was forced to evaluate how it will handle ‘deepfake’ videos, the hyper-realistic hoax clips made by artificial intelligence and high-tech tools.

CEO Mark Zuckerberg suggested that it might make sense to handle such videos differently from other forms of misinformation, such as fake news.


His comments on the scourge of deepfakes come as he defends Facebook’s decision to keep the doctored clip of House Speaker Nancy Pelosi live on its site.

Facebook has long held that it should not be arbitrating between what is and is not true, placing such judgements instead in the hands of external fact-checkers.  

The recent altered video of House Speaker Nancy Pelosi that made her sound like she was slurring her words does not meet the definition of a deepfake and remained on the site.

Facebook refused to take down the clip, instead opting to ‘downrank’ the video in an effort to minimise its spread.

The fact that these videos are made so easily and then widely shared across social media platforms does not bode well for 2020, said Hany Farid, a digital forensics expert at the University of California, Berkeley.

‘The clock is ticking,’ Mr Farid said. ‘The Nancy Pelosi video was a canary in a coal mine.’

Social media companies don’t have clear-cut policies banning fake videos, in part because they don’t want to be in the position of deciding whether something is satire or intended to mislead people – or both. 

Doing so could also open them to charges of censorship or political bias.
