feature article

Deepfake Video is Here. Reality is Fleeting.

Even the DoD is Trying to Fight AI-Generated Fake Video

“The secret of life is honesty and fair dealing. If you can fake that, you’ve got it made.” – Groucho Marx

Deepfake video is here, now. With it comes the relatively easy ability to make anyone say anything you like on video. Post that video on the Internet and you have a very powerful way to disseminate credible disinformation to the world. The technology uses facial mapping and artificial intelligence to create realistic videos—so real that it’s virtually impossible to spot the fakes.

The name “deepfake” is a portmanteau of AI-powered “deep learning” and “fake.” It apparently surfaced on Reddit in 2017. The technique has already been misused to create pornographic videos of famous movie stars who have never starred in such films. It would not surprise me to see this technology misused in the US mid-term elections this fall. Certainly, we’ll be seeing more deepfake video—a lot more—by the time the next US presidential election rolls around in 2020.

Deepfake technology (and its companion downloadable desktop program, FakeApp) starts with an extensive set of facial images of the person being targeted. These images are ridiculously easy to obtain for famous people—politicians and media stars—because they’re always appearing in front of a camera. Capture enough of these facial expressions, tie them to the phonemes being spoken during the capture (crystal-clear sound conveniently recorded and synchronized by HD video recording technology), and use them to map new spoken words onto existing video. With the same techniques, you can get anyone to star in the raunchiest adult flick imaginable.
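Published descriptions of FakeApp-style tools suggest an autoencoder with a shared encoder and one decoder per face; swapping decoders at playback produces the forgery. The toy NumPy sketch below is purely my illustration of that architecture—random low-rank vectors stand in for face images, and simple linear maps stand in for the real convolutional networks:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, DATA_RANK, LATENT, N, STEPS, LR = 16, 4, 8, 200, 1000, 0.02

# Stand-in "face frames": low-rank random vectors, NOT real images.
mix_a = rng.normal(size=(DATA_RANK, DIM)) / np.sqrt(DATA_RANK)
mix_b = rng.normal(size=(DATA_RANK, DIM)) / np.sqrt(DATA_RANK)
faces_a = rng.normal(size=(N, DATA_RANK)) @ mix_a  # person A's frames
faces_b = rng.normal(size=(N, DATA_RANK)) @ mix_b  # person B's frames

enc = rng.normal(scale=0.1, size=(DIM, LATENT))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for face A
dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for face B

def train_step(x, dec):
    """One gradient-descent step on the reconstruction error."""
    global enc
    z = x @ enc                                   # encode the frames
    err = z @ dec - x                             # decode and compare
    dec -= LR * (z.T @ err) / len(x)              # update this decoder
    enc -= LR * (x.T @ (err @ dec.T)) / len(x)    # update shared encoder
    return float((err ** 2).mean())

# Train both decoders against the one shared encoder.
losses = [(train_step(faces_a, dec_a), train_step(faces_b, dec_b))
          for _ in range(STEPS)]

# The swap: encode a frame of person A, then decode it with person B's
# decoder, i.e., B's appearance driven by A's expression.
fake_frame = faces_a[0] @ enc @ dec_b
```

The key design point is the sharing: because one encoder learns to represent both faces, the latent code captures pose and expression in a face-neutral way, so either decoder can render it.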

Want proof? Here’s one of my favorite Weird Al videos, “Perform This Way,” a parody of Lady Gaga’s hit song and video “Born This Way.” In his video parody, Weird Al’s face has been electronically grafted onto the bodies of dancer Vlada Gorbaneva and contortionist Marissa Heart. It’s an obvious fake, but it’s close, even though the video was recorded in 2011. You want better video fakery? Actor and comedian Jordan Peele, along with BuzzFeed, used deepfake technology in 2018 to create this video of President Obama saying things you know he wouldn’t, even with the aid of his fictitious anger translator Luther (played by Jordan Peele’s performing partner Keegan-Michael Key).

The Weird Al video, made without the aid of deepfake technology, firmly resides in the uncanny valley, but BuzzFeed’s much newer Obama video made with deepfake technology and Jordan Peele’s mouth comes a lot closer to looking real. The technology will do nothing but improve from here.

Deepfake videos have now appeared on YouTube and Vimeo, and they have been banned by several sites. Heck, even Pornhub.com has banned deepfake video! But how can people know if the video is real or fake? It’s not going to be easy.

People have been creating fake, Photoshop-modified images for years. One common use for this sort of technology is to make fashion models look thinner—often impossibly thin. In 2009, an infamous Ralph Lauren ad electronically shrunk the waist and torso of fashion model Filippa Hamilton so that her head appeared to be bigger than her pelvis. A lot bigger. All sorts of women’s body parts get modified for print and online ads to achieve someone’s beauty ideal. There are artists who have become enormously skilled at such fakery: melting the fat from bodies, removing blemishes, erasing dark circles under eyes, and repairing hairlines all using photo-editing programs.

But video has been a much tougher nut to crack simply because of the sheer number of images that need to be retouched: 25, 30, or even 60 per second. That’s not to say fake-video technology wasn’t predicted long ago.

For example, Michael Crichton’s 1992 novel “Rising Sun” described a murder that took place at the fictional Nakamoto Corporation in its equally fictitious US headquarters in Los Angeles. (At least LA is real, sort of.) The key piece of evidence in Crichton’s story was a recording of the murder taken by a security camera—a faked video that had been produced in mere hours. The culprit was caught only because a reflective object also appearing in the video had captured the actual murder, and by zooming in on that unmodified object, the true scene could be retrieved. The MacGuffin in this story was that the Japanese security cameras had such tremendous resolution that the tiny reflected image could be magnified while staying usable. I remember reading this novel on a plane trip to Japan. It was a terrific novel that later became an absorbing, commercially successful movie starring Sean Connery and Wesley Snipes.

Well, the fake-video technology portrayed in “Rising Sun” is no longer science fiction. Just 25 years later, it’s real; it’s automated; and it’s powered by AI. The question is, what will we now do about it?

We don’t really have a choice. We must do something about deepfake video technology. With the torrent of video poured down our throats daily, losing the ability to tell real from fake is going to make what happened in the last US presidential election with text-based social media look like finger painting in kindergarten. It’s going to make a hash out of international politics and war coverage.

So what are we doing?

The US Department of Defense is currently funding a project in an attempt to determine whether AI-based deepfake video and audio might soon be impossible to distinguish from the real thing—even using an AI-based detection scheme. The military is mighty interested in being able to detect fake video. It’s clearly a matter of national defense, among other things.

This month, DARPA is holding a two-week contest in which ten university teams from the US and Europe will compete to develop techniques for distinguishing real videos from AI-generated fakes.

The contest is a part of DARPA’s Media Forensics (MediFor) program, which is attempting “to level the digital imagery playing field… by developing technologies for the automated assessment of the integrity of an image or video.” DARPA’s MediFor program began soliciting research applications in 2015, launched in 2016, and is currently funded through 2020. By some accounts, the research is already bearing fruit.

However, analyzing video using AI to ascertain its veracity seems like the long way around the problem. In many ways, I think, this is the same problem that we have with the anonymity conferred on users by the Internet in general. Fake news, fake social posts, fake photos, fake audio, and fake video all seem part of a Webby continuum to me.

This is not an unsolvable problem. Societies have solved this type of problem before: financial transactions have been protected against fakery for thousands of years. Letters of credit, used extensively today in international finance, may date all the way back to ancient Egypt and Babylon. The University Museum of Philadelphia has a clay promissory note from Babylon dating from around 3000 BC. The Medici Bank used letters of credit across Europe in the 1300s and 1400s, and their use continues to this day.

The worldwide banking system is now fully electronic and relies on numerous safeguards to certify financial transactions. It’s not impossible to break the security of this system. It’s just really hard.

And in the latest extension to this historically long line of instruments used to protect financial transactions, we have Bitcoin and the hundreds of follow-on cryptocurrencies, which are all based on blockchain technology with ledger systems distributed across server systems located throughout the cloud. Supposedly, there’s safety in numbers, although it seems to me that a lot of Bitcoin has been stolen despite the safeguards. We’ve got more to learn here.

It’s clear to me that we’re going to need similar certification technology for videos: verification and certification built into some future video-encoding standard (perhaps based on blockchain technology) and into every video player that supports that standard. Before long, we won’t be able to tell the real videos from the forgeries any other way, and neither will our AI overlords.
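To make the idea concrete, here is a minimal sketch of what per-segment video certification could look like. This is purely my illustration—the function, the segment names, and the chaining scheme are invented for the example, not part of any actual standard: hash each chunk of the stream into a running chain, publish the final digest at recording time, and let any player recompute it.

```python
import hashlib

def chain_digest(segments):
    """Fold each segment's bytes into a running SHA-256 hash chain."""
    digest = b"\x00" * 32                       # genesis value
    for seg in segments:
        digest = hashlib.sha256(digest + seg).digest()
    return digest.hex()

# "Recording time": the camera computes and publishes the digest.
original = [b"frame-0001", b"frame-0002", b"frame-0003"]
published = chain_digest(original)

# "Playback time": the player recomputes the chain over whatever
# file it actually received and compares with the published value.
tampered = [b"frame-0001", b"frame-XXXX", b"frame-0003"]

print(chain_digest(original) == published)   # True: untouched video
print(chain_digest(tampered) == published)   # False: edit detected
```

A real system would also need the publisher to sign the digest and anchor it somewhere tamper-evident—a public ledger is one option—which is where the blockchain comparison comes in.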
