feature article

Deepfake Video is Here. Reality is Fleeting.

Even the DoD is Trying to Fight AI-Generated Fake Video

“The secret of life is honesty and fair dealing. If you can fake that, you’ve got it made.” – Groucho Marx

Deepfake video is here, now. With it comes the relatively easy ability to make anyone say anything you like on video. Post that video on the Internet and you have a very powerful way to disseminate credible disinformation to the world. The technology uses facial mapping and artificial intelligence to create realistic videos—so real that it’s virtually impossible to spot the fakes.

The name “deepfake” is a portmanteau of AI-powered “deep learning” and “fake.” It apparently surfaced on Reddit in 2017. The technique has already been misused to create pornographic videos of famous movie stars who never appeared in such films. It would not surprise me to see this technology misused in the US mid-term elections this fall. Certainly, we’ll be seeing more deepfake video—a lot more—by the time the next US presidential election rolls around in 2020.

The deepfake technology (and the companion, downloadable desktop program FakeApp) starts with an extensive set of facial images taken of the person being targeted. These images are ridiculously easy to obtain for famous people—politicians and media stars—because they’re always appearing in front of a camera. Capture enough of these facial expressions, tie them to the phonemes being spoken during the capture (crystal clear sound conveniently recorded and synchronized by HD video recording technology), and use them to map new spoken words onto existing video. In addition, you can now get anyone to star in the raunchiest adult flick imaginable.
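To make that pipeline concrete, here is a toy numpy sketch of the shared-encoder, two-decoder architecture that tools like FakeApp are built around. Everything here is illustrative: the weights are random and untrained, and the layer sizes are invented; the point is only the data flow, in which a frame of person A is encoded into a shared latent space and then rendered by person B’s decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random, untrained weights -- this only illustrates the data flow.
    return rng.standard_normal((n_in, n_out)) * 0.1

# One encoder shared by both identities; one decoder per identity.
W_enc = layer(1024, 64)        # 32x32 face crop -> 64-dim latent code
W_dec_a = layer(64, 1024)      # latent -> reconstruction of person A
W_dec_b = layer(64, 1024)      # latent -> reconstruction of person B

def encode(face):
    return np.tanh(face @ W_enc)

def decode(code, W_dec):
    return code @ W_dec

# Training (not shown) teaches each decoder to reconstruct its own
# person from the shared latent code. The swap itself is a one-liner:
face_a = rng.standard_normal(1024)         # a frame of person A
swapped = decode(encode(face_a), W_dec_b)  # rendered as person B

print(swapped.shape)  # (1024,)
```

The key design trick is the shared encoder: because both decoders learn from the same latent representation of expression and pose, feeding person A’s expression through person B’s decoder produces person B’s face making person A’s expression, frame after frame.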

Want proof? Here’s one of my favorite Weird Al videos, “Perform This Way,” a parody of Lady Gaga’s hit song and video “Born This Way.” In the parody, Weird Al’s face has been electronically grafted onto the bodies of dancer Vlada Gorbaneva and contortionist Marissa Heart. It’s an obvious fake, but it’s close, even though the video was recorded in 2011. You want better video fakery? Actor and comedian Jordan Peele, along with BuzzFeed, used deepfake technology in 2018 to create this video of President Obama saying things you know he wouldn’t, even with the aid of his fictitious anger translator Luther (played by Jordan Peele’s performing partner Keegan-Michael Key).

The Weird Al video, made without the aid of deepfake technology, firmly resides in the uncanny valley, but BuzzFeed’s much newer Obama video, made with deepfake technology and Jordan Peele’s mouth, comes a lot closer to looking real. The technology will do nothing but improve from here.

Deepfake videos have now appeared on YouTube and Vimeo, and they have been banned by several sites. Heck, even Pornhub.com has banned deepfake video! But how can people know if the video is real or fake? It’s not going to be easy.

People have been creating fake, Photoshop-modified images for years. One common use for this sort of technology is to make fashion models look thinner—often impossibly thin. In 2009, an infamous Ralph Lauren ad electronically shrunk the waist and torso of fashion model Filippa Hamilton so that her head appeared to be bigger than her pelvis. A lot bigger. All sorts of women’s body parts get modified for print and online ads to achieve someone’s beauty ideal. There are artists who have become enormously skilled at such fakery: melting the fat from bodies, removing blemishes, erasing dark circles under eyes, and repairing hairlines all using photo-editing programs.

But video has been a much tougher nut to crack simply because of the sheer number of images that need to be retouched: 25, 30, or even 60 per second. That’s not to say fake-video technology wasn’t predicted, though; it was foreseen long ago.

For example, Michael Crichton’s 1992 novel “Rising Sun” described a murder that took place at the fictional Nakamoto Corporation in its equally fictitious US headquarters in Los Angeles. (At least LA is real, sort of.) The key piece of evidence in Crichton’s story was a recording of the murder taken by a security camera—a faked video that had been produced in mere hours. The culprit was caught only because a reflective object appearing in the video had captured the actual murder; by zooming in on that unmodified object, the true scene could be retrieved. The MacGuffin in this story was that the Japanese security cameras had such tremendous resolution that the tiny reflected image could be magnified while staying usable. I remember reading this novel on a plane trip to Japan. It was a terrific novel that later became an absorbing, commercially successful movie starring Sean Connery and Wesley Snipes.

Well, the fake-video technology portrayed in “Rising Sun” is no longer science fiction. Just 25 years later, it’s real; it’s automated; and it’s powered by AI. The question is, what will we now do about it?

We don’t really have a choice. We must do something about deepfake video technology. With the torrent of video poured down our throats daily, losing the ability to know real from fake is going to make what happened in the last US presidential election with text-based social media look like finger painting in Kindergarten. It’s going to make a hash out of international politics and war coverage.

So what are we doing?

The US Department of Defense is currently funding a project in an attempt to determine whether AI-based deepfake video and audio might soon be impossible to distinguish from the real thing—even using an AI-based detection scheme. The military is mighty interested in being able to detect fake video. It’s clearly a matter of national defense, among other things.

DARPA is conducting a two-week contest this month aimed at detecting deepfake videos. Ten university teams from the US and Europe will compete, attempting to develop techniques that distinguish real videos from AI-generated fakes.

The contest is part of DARPA’s Media Forensics (MediFor) program, which is attempting “to level the digital imagery playing field… by developing technologies for the automated assessment of the integrity of an image or video.” The MediFor program began soliciting research applications in 2015, launched in 2016, and is currently funded through 2020. By some accounts, the research is already bearing fruit.
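For a feel of what a detector might look at, here is a deliberately crude sketch—my own toy heuristic, not anything from MediFor. Faces swapped in frame by frame often show temporal flicker in the composited region, so an unusually high frame-to-frame difference can be a red flag. The “video” here is synthetic numpy data standing in for real frames.

```python
import numpy as np

rng = np.random.default_rng(1)

def temporal_flicker_score(frames):
    """Mean absolute frame-to-frame change across a clip.
    A crude stand-in for the temporal-consistency cues real forensic
    systems exploit: per-frame compositing tends to flicker."""
    diffs = np.abs(np.diff(frames, axis=0))
    return float(diffs.mean())

# Synthetic stand-ins: a smooth "real" clip vs. one with independent
# per-frame noise injected into a face-sized patch, mimicking the
# jitter of a frame-by-frame face composite.
real = np.cumsum(rng.standard_normal((30, 64, 64)) * 0.01, axis=0)
fake = real.copy()
fake[:, 16:48, 16:48] += rng.standard_normal((30, 32, 32)) * 0.5

print(temporal_flicker_score(real) < temporal_flicker_score(fake))  # True
```

Real detectors are far more sophisticated, of course, and the arms-race problem is obvious: any statistical tell a detector learns to spot can be fed back into the generator’s training loss and smoothed away.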

However, analyzing video using AI to ascertain its veracity seems like the long way around the problem. In many ways, I think, this is the same problem that we have with the anonymity conferred on users by the Internet in general. Fake news, fake social posts, fake photos, fake audio, and fake video all seem part of a Webby continuum to me.

This is not an unsolvable problem. Societies have solved this type of problem before: fakery in financial transactions has been fought, successfully, for thousands of years. Letters of credit, used extensively today in international finance, may date all the way back to ancient Egypt and Babylon. The University Museum of Philadelphia has a Babylonian clay promissory note dating from around 3000 BC. Letters of credit were used in the 1300s and 1400s by the Medici Bank in Europe, and their use continues to this day.

The worldwide banking system is now fully electronic and relies on numerous safeguards to certify financial transactions. It’s not impossible to break the security of this system. It’s just really hard.

And in the latest extension to this historically long line of instruments used to protect financial transactions, we have Bitcoin and the hundreds of follow-on cryptocurrencies, which are all based on blockchain technology with ledger systems distributed across server systems located throughout the cloud. Supposedly, there’s safety in numbers, although it seems to me that a lot of Bitcoin has been stolen despite the safeguards. We’ve got more to learn here.

It’s clear to me that we’re going to need similar certification technology for videos. Verification and certification will have to be built into some future video-encoding standard (perhaps based on blockchain technology) and into every video player that supports that standard, because it won’t be long before we won’t be able to tell the real from the forgeries any other way—and neither will our AI overlords.
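As a rough sketch of what such certification could look like—my speculation, not any existing standard—a publisher could chain the hashes of successive video segments, blockchain-style, and authenticate the final digest. A real scheme would use public-key signatures embedded in the container format; this toy version uses Python’s standard-library hashing and an HMAC to show the core idea.

```python
import hashlib
import hmac
import os

def chain_hashes(segments, key):
    """Chain each segment's hash to the previous digest, then tag the
    final digest with an HMAC under the publisher's key. Editing any
    segment changes every digest downstream, so the tag fails."""
    digest = b"\x00" * 32
    for seg in segments:
        digest = hashlib.sha256(digest + seg).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

key = os.urandom(32)                           # publisher's signing key
clip = [b"frame-group-%d" % i for i in range(5)]

tag = chain_hashes(clip, key)                  # published alongside the video

# A compliant player re-derives the tag; a forged segment breaks it.
tampered = list(clip)
tampered[2] = b"forged segment"
print(chain_hashes(tampered, key) == tag)      # False
```

The hard problems are not the cryptography but the ecosystem: key distribution, trusted capture hardware, and getting every player and platform to honor the standard.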
