Deepfake

A rising sense of unease has developed around advancing deepfake technologies, which make it possible to fabricate evidence of events that never occurred. Celebrities have become unwitting stars of pornographic videos, and politicians have appeared in clips saying words they never actually uttered.

Concerns about deepfakes have triggered an explosion of countermeasures. New laws are being enacted to stop people from creating and sharing them. Earlier this year, social media sites such as Facebook and Twitter banned deepfakes from their platforms. And talks explaining how to defend against them abound at computer vision and graphics conferences.

So what exactly is a deepfake, and why are people so worried?

What is a deepfake?

Deepfake technology can seamlessly insert anyone in the world into a video or photo in which they never actually took part. Such capabilities have existed for decades; that is how the late actor Paul Walker was resurrected for Furious 7. But it used to take entire studios full of experts a year to create those effects. Deepfake technologies, new automatic computer graphics and machine-learning systems, can now produce such images and videos far more quickly.

However, there is a lot of confusion around the term “deepfake,” and computer vision and graphics researchers are united in their dislike of it. It has become a catch-all for everything from cutting-edge AI-generated videos to any image that might possibly be fake.

A lot of what gets called a deepfake isn’t one. For example, a contentious “crickets” video of the U.S. Democratic primary debate, released by former presidential candidate Michael Bloomberg’s campaign, was made with ordinary video-editing skills. Deepfakes played no part in it.

How deepfakes are created

Machine learning is the key ingredient in deepfakes, and it is what makes them considerably faster and cheaper to produce. To make a deepfake video of someone, a creator would first train a neural network on many hours of real video footage of the person, giving it a realistic “understanding” of what they look like from many angles and under different lighting. The trained network would then be combined with computer-graphics techniques to superimpose a copy of the person onto a different actor.
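To make that concrete, here is a minimal sketch in the spirit of the early DIY face-swap tools: one shared encoder learns a general representation of faces, one decoder per identity learns to reconstruct that person, and the “swap” comes from feeding person A’s encoding into person B’s decoder. Everything here is an illustrative assumption (the network sizes, the training loop, and the random tensors standing in for aligned face crops), not any specific tool’s code.

```python
import torch
import torch.nn as nn

# Shared encoder: maps a 64x64 face crop to a compact latent code.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),    # 64 -> 32
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.1),  # 32 -> 16
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.LeakyReLU(0.1), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512),
        )

    def forward(self, x):
        return self.net(x)

# Per-person decoder: reconstructs that person's face from the latent code.
class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=5e-5)
loss_fn = nn.L1Loss()

# Stand-ins for batches of aligned face crops extracted from hours of footage.
faces_a = torch.rand(16, 3, 64, 64)
faces_b = torch.rand(16, 3, 64, 64)

for step in range(100):
    opt.zero_grad()
    # Each decoder is trained only to reconstruct its own person's faces,
    # so the shared encoder is pushed toward a person-agnostic representation.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: encode person A's face, decode it as person B.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

The swapped face is only the starting point; as the next paragraph notes, compositing it convincingly back into a full frame is where much of the manual work lives.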

While the addition of AI makes the process faster, producing a believable composite that places a person in an entirely fictional situation still takes time. The creator must also manually tweak many of the trained program’s parameters to avoid telltale blips and artifacts in the image. The process is more involved than it might sound.

Many experts believe that generative adversarial networks (GANs), a kind of deep-learning algorithm, will be the primary engine of deepfake development in the future. GAN-generated faces are nearly impossible to distinguish from real ones. The first audit of the deepfake landscape devoted an entire section to GANs, suggesting they will make it possible for anyone to create sophisticated deepfakes.
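The adversarial idea itself is compact: a generator learns to produce samples that a discriminator, trained simultaneously to tell real data from generated data, can no longer reject. Below is a toy sketch of that loop; the tiny fully connected networks and random stand-in “images” are assumptions for illustration and bear no resemblance to the scale of a production face model.

```python
import torch
import torch.nn as nn

# Generator G: noise vector -> flattened 28x28 "image".
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
# Discriminator D: flattened image -> realness logit.
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(128, 784) * 2 - 1  # stand-in for real training images

for step in range(200):
    # Discriminator update: push real samples toward 1, fakes toward 0.
    fake = G(torch.randn(128, 64)).detach()
    d_loss = (bce(D(real_images), torch.ones(128, 1))
              + bce(D(fake), torch.zeros(128, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: make D score fresh fakes as real.
    g_loss = bce(D(G(torch.randn(128, 64))), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Note: nothing in this loop ties one generated frame to the next, which is
# one reason plain GANs struggle with the temporal consistency video needs.
```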

But Siwei Lyu of SUNY Buffalo says the focus on this particular technique is misplaced. “Most deepfake videos these days are generated by algorithms in which GANs don’t play a very prominent role,” he says.

GANs are difficult to work with and require huge amounts of training data. It also takes the models longer to generate images than it would with other techniques. And, most important, GAN models are good at synthesizing images, not at making videos. They struggle to preserve temporal consistency, that is, to keep the same image aligned from one frame to the next.

The best-known audio “deepfakes” don’t use GANs either. When the Canadian AI startup Dessa (since acquired by Square) used the voice of talk show host Joe Rogan to utter sentences he never spoke, GANs were not involved. In fact, the vast majority of today’s deepfakes are made using a combination of AI and non-AI algorithms.

Who created deepfakes?

The most impressive deepfake examples come from university labs and the startups they seed: a widely circulated video of soccer star David Beckham speaking fluently in nine languages, only one of which he actually speaks, was built on code developed at the Technical University of Munich in Germany.

In addition, MIT researchers have published an unsettling video of former U.S. President Richard Nixon delivering the alternative address he had prepared for the nation in case Apollo 11 failed.

However, these are not the deepfakes that have governments and academia worried. Deepfakes don’t have to be lab-grade or high-tech to be harmful to society, as nonconsensual pornographic deepfakes and other problematic forms demonstrate.

Indeed, deepfakes get their name from the ur-example of the genre, created in 2017 by a Reddit user calling himself u/deepfakes, who used Google’s open-source deep-learning toolkit to swap porn performers’ faces for those of actresses. The DIY deepfake programs circulating today are mostly descended from that original code, and while some may make for amusing thought experiments, none are convincing.

So what’s everyone so worried about? “Technology is always improving. That’s just the way it is,” says Hany Farid, a digital forensics specialist at the University of California, Berkeley. The scientific community is divided on when DIY techniques will be refined enough to pose a serious threat; predictions range from 2 to 10 years. But experts agree that one day, anybody will be able to pull up a smartphone app and produce convincing deepfakes of anyone else.

What are deepfakes used for?

Right now, deepfakes pose the biggest danger to women: nonconsensual pornography accounts for 96 percent of the deepfakes currently posted online. Most target celebrities, but there are a growing number of reports of deepfakes being used to create fake revenge porn, says Henry Ajder, head of research at the Amsterdam-based detection firm Deeptrace.

The bullying will not be limited to women, however. Deepfakes will likely enable bullying more broadly, whether in schools or workplaces, since anyone can place people in absurd, dangerous, or compromising situations.

Corporations are concerned about the role deepfakes could play in supercharging scams. There have been unconfirmed reports of deepfake audio being used in CEO fraud to trick employees into wiring money to criminals. Extortion could become a major use case. More than three-quarters of respondents to a cybersecurity industry survey by the biometrics company iProov cited identity theft as their top concern about deepfakes. Their chief fear was that deepfakes would be used to make fraudulent online payments and break into personal banking services.

The greater concern for governments is that deepfakes endanger democracy. If you can make a female celebrity appear in a porn video, you can do the same to a politician seeking re-election. In 2018, a video surfaced showing João Doria, the married governor of São Paulo, Brazil, taking part in an orgy. He insisted it was a deepfake. There have been other instances. In 2018, Gabon’s president, Ali Bongo, who had long been presumed unwell, appeared in a suspicious-looking video to reassure the population, igniting an attempted coup.

The ambiguity around these unconfirmed cases points to the biggest danger of deepfakes, whatever their current capabilities: the liar’s dividend, a fancy way of saying that the very existence of deepfakes provides cover for anyone to do anything they want, because they can dismiss any evidence of wrongdoing as a deepfake. It’s a one-size-fits-all form of plausible deniability. “That is something you are starting to see,” says Farid, “that liar’s dividend being used to get out of trouble.”

How do we stop malicious deepfakes?

Several laws addressing deepfakes were enacted in the United States last year. States are proposing legislation to ban deepfake pornography and to restrict the use of deepfakes during elections. Deepfake porn is now prohibited in Texas, Virginia, and California, and in December the president signed the first federal ban as part of the National Defense Authorization Act. However, these new laws only help if the perpetrator lives in one of those jurisdictions.

Outside the United States, China and South Korea are the only countries taking specific steps to outlaw deepfake deception. In the United Kingdom, the Law Commission is reviewing existing laws on revenge porn to address the various ways deepfakes can be made. The European Union, however, does not consider them a more pressing concern than other kinds of internet disinformation.

So although the United States is at the forefront, there is little evidence that the laws being proposed are enforceable or reflect the right priorities.

While numerous research labs have developed novel ways to identify and detect manipulated videos (embedding watermarks or using a blockchain, for example), it is hard to build deepfake detectors that can’t quickly be gamed to create more convincing deepfakes.
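As a rough illustration of what one common research approach looks like, here is a hedged sketch of a frame-level detector: fine-tune an off-the-shelf image classifier to label individual face crops as real or fake. The random tensors and labels are stand-ins; real systems add face detection and alignment, temporal features across frames, and carefully curated training data.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a pretrained image backbone and replace its final layer
# with a two-way head: 0 = real, 1 = fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for labeled 224x224 face crops cut out of video frames.
frames = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
for step in range(10):
    opt.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    opt.step()

# Caveat from the text above: a classifier like this is itself a model
# that a forger can train against, which is why detectors keep getting gamed.
```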

Nonetheless, tech companies are making an effort. Facebook enlisted researchers from Berkeley, Oxford, and other institutions to build a deepfake detector to help it enforce its new ban. Twitter has made significant changes to its policies and is reportedly developing ways to tag any deepfakes that are not deleted outright. And in February, YouTube reiterated that it would not allow deepfake videos related to the U.S. election, voting procedures, or the 2020 census.

But what about deepfakes beyond these fortified walls? Reality Defender and Deeptrace are two services that aim to keep deepfakes out of your life. Deeptrace’s API acts as a hybrid antivirus/spam filter, prescreening incoming media and directing evident manipulations to a quarantine zone, much as Gmail automatically diverts spam before it reaches your inbox. Reality Defender, a tool developed by the startup AI Foundation, similarly aims to tag and bag manipulated images and videos before they can do damage. “We think it’s unfair,” Ajder says.
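Neither service’s interface is described here, but the spam-filter analogy suggests a simple shape for such a prescreening layer. The sketch below is purely hypothetical: score_manipulation is a stand-in scoring function, not Deeptrace’s or Reality Defender’s actual API, and the threshold is an arbitrary assumption.

```python
from dataclasses import dataclass

QUARANTINE_THRESHOLD = 0.8  # arbitrary cutoff for this sketch

@dataclass
class MediaItem:
    url: str
    payload: bytes

def score_manipulation(item: MediaItem) -> float:
    """Stand-in for a real detector; would return P(manipulated) in [0, 1]."""
    return 0.0  # placeholder only

def prescreen(inbox: list[MediaItem]) -> tuple[list[MediaItem], list[MediaItem]]:
    """Split incoming media into delivered and quarantined, spam-filter style."""
    delivered, quarantined = [], []
    for item in inbox:
        if score_manipulation(item) >= QUARANTINE_THRESHOLD:
            quarantined.append(item)  # held back for review, like flagged spam
        else:
            delivered.append(item)
    return delivered, quarantined
```

Whatever the real threshold, the earlier caveat still applies: a filter like this is only as good as the detector behind it.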

 
