deepfake

a.k.a. deepfakes, deep fake, deep fakes, deep learning, deepfake videos, dialogue replacement technology, Internet hoax
A sophisticated type of software that makes it possible to superimpose one person’s face onto another person’s body and manipulate voice recordings, creating fake videos that look and sound real. A deepfake is a manipulated video that can turn anyone into someone else or into an audiovisual puppet.

At the core of the deepfakes code is a “deep neural network,” a computing system vaguely modeled on the biological neural networks that make up human brains. Such systems “learn,” or progressively improve their performance, by taking in and analyzing vast amounts of data, acquainting themselves with the information via trial and error, and adjusting to feedback about what’s wrong and right. Like a brain, AI networks reprogram themselves by reacting to patterns in incoming data, rather than relying on fixed rules. FakeApp uses a suite of neural networking tools that were developed by Google’s AI division and released to the public in 2015. The software teaches itself to perform image-recognition tasks through trial and error. First, FakeApp trains itself, using “training data” in the form of photos and videos. Then it stitches the face onto another head in a video clip, accurately preserving the facial expression in the original video.
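To make the idea concrete, here is a minimal sketch, in Python with the PyTorch library, of the kind of design FakeApp-style face-swap tools are understood to rely on: one shared encoder learns to compress any face into a compact code, and a separate decoder is trained to reconstruct each person. The layer sizes, names, and training loop below are illustrative assumptions, not FakeApp’s actual code.

```python
# A minimal sketch (PyTorch assumed) of the shared-encoder / two-decoder
# autoencoder idea behind FakeApp-style face swapping. All sizes and names
# are illustrative assumptions, not the actual FakeApp implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a 64x64 RGB face crop into a small latent vector.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Reconstructs a face from the latent vector; one decoder per identity.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # person A and person B
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    # faces_a, faces_b: batches of face crops for each person, shape (N, 3, 64, 64),
    # values in [0, 1]. The network learns to reconstruct both people through
    # the same encoder, which is what later makes the swap possible.
    optimizer.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because both people share the same encoder, the latent code captures expression and pose in a way either decoder can read, which is what allows the face swap described in the FAQ below.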

Historical perspective: According to The Week, Hollywood studios have long used computer-generated imagery (CGI) to create fleeting appearances of dead actors, for example. But the process used to be prohibitively expensive and laborious. By 2018, this kind of technology had improved so much that highly realistic visual and audio fakery could be produced by anyone with a powerful home computer. This has already resulted in a cottage industry of fake celebrity online porn. Fears are growing over how else “deepfake” videos could be used, from smearing politicians in elections to inciting major international conflict. Earlier in 2018, BuzzFeed.com created a “public service announcement” warning of the technology’s dangers, with a deepfake of former President Barack Obama voiced by the comedian and director Jordan Peele. “We’re entering an era,” the fake Obama says, “in which our enemies can make it look like anyone is saying anything.” To illustrate the point, the fake Obama goes on to call President Trump “a total and complete dips---.” Here are several FAQs about this new technology that makes it alarmingly easy to make realistic videos of people saying and doing things they’ve never done.

Where did deepfakes originate?
In porn, of course. In December 2017, an anonymous Reddit user calling himself “deepfakes” started posting fake but realistic-looking videos of celebrities engaged in explicit sex. By January 2018, the “deepfake” technology had been shared through a free app, FakeApp, which has since been downloaded more than 120,000 times. FakeApp and its imitators sparked an explosion of fake pornography online, with Michelle Obama, Ivanka Trump, and Emma Watson among those most frequently victimized. But it’s not all porn. The technology has also been used to create harmless spoof and parody videos, inserting Reddit cult figure Nicolas Cage into films in which he didn’t appear.

How do deepfakes work?
The creator gathers a trove of photos or videos of the target, so it helps if the target is a famous person, along with the video to be doctored. The video maker then feeds the data into the app, which uses a form of artificial intelligence (AI) known as “deep learning”—hence deepfake—to combine the face in the source images with the chosen video. This process requires a sizable graphics processing unit and a vast amount of memory. It’s also time-consuming: the Obama/Peele video took 56 hours to make, and the quality is variable. But the technology is improving fast. Tech expert Antonio García Martínez, writing for Wired, says we’ll soon be able to superimpose anyone’s face onto anyone else’s, creating uncannily authentic videos of just about anything.
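Continuing the sketch above, the conversion step runs each frame of the chosen video through a face detector, encodes the detected face with the shared encoder, but decodes it with the other person’s decoder, then pastes the result back into the frame. OpenCV is assumed for video handling here, and the naive paste-back stands in for the careful alignment and blending real tools perform; none of the names below come from FakeApp itself.

```python
# A rough sketch (OpenCV assumed, plus the encoder/decoder_b from the earlier
# sketch) of the face-swap conversion step. The detector, resolution, and
# paste-back are simplified illustrations of the idea, not FakeApp's code.
import cv2
import numpy as np
import torch

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def swap_faces(src_video, dst_video, encoder, decoder_b):
    cap = cv2.VideoCapture(src_video)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_video, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, fw, fh) in detector.detectMultiScale(gray, 1.3, 5):
            crop = cv2.resize(frame[y:y + fh, x:x + fw], (64, 64))
            tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                fake = decoder_b(encoder(tensor))        # decode as the *other* person
            fake = (fake.squeeze(0).permute(1, 2, 0).numpy() * 255).astype(np.uint8)
            frame[y:y + fh, x:x + fw] = cv2.resize(fake, (fw, fh))  # naive paste-back
        out.write(frame)
    cap.release()
    out.release()
```

The expensive part is the training, not this conversion pass, which is why a long video like the Obama/Peele clip still takes days of compute to produce convincingly.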

How are voices faked?
The principle is the same: You feed lots of recordings of the person you want to fake into an AI program, which chops up sounds and words into discrete bits; software can then rearrange the sounds so the subject can say anything you like. A team of sound engineers recently used deep-learning software to analyze 831 of John F. Kennedy’s speeches, and then created a convincing approximation of the 35th president reading the speech he was due to deliver the day he was assassinated. Researchers at the University of Washington in 2017 synthesized realistic videos of Barack Obama speaking by mapping audio from one speech onto an existing video of him talking.
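As a toy illustration of that chop-and-rearrange idea, the sketch below splices pre-cut word clips from the target speaker into a new sentence (so-called concatenative synthesis). Real voice-cloning systems learn a statistical model of the voice rather than literally splicing recordings; the clip files, word list, and use of the soundfile package here are hypothetical.

```python
# A toy sketch of the "chop and rearrange" principle (concatenative synthesis).
# Real systems model the voice rather than splicing words, but this shows the
# idea. The clip files and words are hypothetical; numpy and soundfile assumed.
import numpy as np
import soundfile as sf

def build_sentence(clips, words, out_path, sample_rate=16000, gap_ms=60):
    # clips: dict mapping a word to a mono numpy array of audio samples
    # harvested from many recordings of the target speaker.
    gap = np.zeros(int(sample_rate * gap_ms / 1000), dtype=np.float32)
    pieces = []
    for word in words:
        pieces.append(clips[word].astype(np.float32))
        pieces.append(gap)
    sf.write(out_path, np.concatenate(pieces), sample_rate)

# Hypothetical usage, once clips have been cut from the speaker's recordings:
# clips = {w: sf.read(f"clips/{w}.wav")[0] for w in ["our", "enemies", "can"]}
# build_sentence(clips, ["our", "enemies", "can"], "fake_line.wav")
```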

How much trouble can this cause?
Potentially, a lot. On deepfake forums, there are frequent requests for help in producing face-swap porn videos of ex-girlfriends, classmates, and teachers. In the public sphere, the technology could be even more toxic. Fake videos could show soldiers committing atrocities, or world leaders declaring war on another country, triggering an actual military response. Deepfakes could be used to damage the reputation of a politician, or a political party, or an entire country. And if fake videos become commonplace, people may start assuming real videos are fake, too. That skepticism could be corrosive. It’ll only take a couple of big hoaxes to really convince the public that nothing’s real.

Can deepfakes be stopped?
To reduce the potential dangers of deepfakes, videos can be equipped with a unique digital key that proves their origin, or with metadata showing where and when they were captured. Artificial intelligence can be trained to recognize deepfakes and remove them from websites. Deepfakes have already been banned from many porn sites, as well as from Twitter. Ultimately, though, the genie is out of the bottle. FakeApp’s creator, “deepfakeapp,” another Reddit user, told ViceNews.com he wanted to give everyday people the opportunity to use technology previously limited to big-budget SFX companies. Most tech experts say people will simply have to adapt to this new normal, by recalibrating their trust in the once unimpeachable medium of video. Soon, we won’t be able to trust our own eyes.
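As an illustration of the “unique digital key” idea, the sketch below hashes a video file together with its capture metadata and signs the result, so anyone holding the matching public key can later check whether the clip has been altered since it was recorded. The file name, metadata fields, and use of the Python cryptography package are assumptions for the example, not a description of any deployed system.

```python
# A minimal sketch of signing a video's fingerprint so its origin can be
# verified later. File names and metadata fields are hypothetical; the
# "cryptography" package is assumed to be installed.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def fingerprint(video_path, metadata):
    # Hash the raw video bytes together with its capture metadata.
    h = hashlib.sha256()
    with open(video_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    h.update(json.dumps(metadata, sort_keys=True).encode())
    return h.digest()

private_key = Ed25519PrivateKey.generate()   # held by the camera or publisher
public_key = private_key.public_key()        # distributed to anyone who verifies

metadata = {"captured_at": "2018-04-17T14:03:00Z", "device": "example-camera"}
signature = private_key.sign(fingerprint("clip.mp4", metadata))  # hypothetical file

# Later, a platform or viewer re-computes the fingerprint and checks it.
try:
    public_key.verify(signature, fingerprint("clip.mp4", metadata))
    print("Video and metadata match the signed original.")
except InvalidSignature:
    print("Video or metadata have been altered.")
```

Detection tools work from the opposite direction, training classifiers to spot the artifacts deepfakes leave behind, but both approaches are in an arms race with the fakery itself.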

By 2019, deepfake videos had become weapons to harass and humiliate women with fake porn, according to Engadget.com, and there’s no solution in sight. Actress Scarlett Johansson, among the most prominent victims of the technology, says that it has gotten so bad that trying to fight the deepfakes is a lost cause. “You want to tear everything off the internet,” Johansson says. “But you can’t.” While many of the worries around technology that uses AI to create convincing fake videos have centered on national security, such videos have proliferated in porn. One such video that used Johansson’s image garnered 1.5 million views.

Also in 2019, two artists posted a fake video of Mark Zuckerberg onto Facebook-owned Instagram to test the platform’s policies on spreading misinformation, according to The Washington Post. Facebook has come under fire for refusing to delete a viral video of House Speaker Nancy Pelosi that was edited to make her sound as if she were drunkenly slurring her words. The Zuckerberg deepfake—a sophisticated altered video—appeared to show the Facebook CEO bragging about abusing stolen data in a segment from CBS News’ streaming channel. “Imagine this for a second: one man with total control of billions of people’s stolen data,” Zuckerberg appears to say. But the words “were actually spoken by a voice actor reading from a script” and dubbed over Zuckerberg’s image by “dialogue replacement technology.”
NetLingo Classification: Net Technology
