Nancy Pelosi drunkenly slurred her words in a public address. Facebook’s Mark Zuckerberg gleefully asserted that whoever has the data will rule the world. A young Princess Leia appeared in Rogue One: A Star Wars Story, decades after Carrie Fisher last played the part. None of it was real: these are all examples of deepfake-style videos.
Deepfake videos, created by artificial intelligence (AI), make people appear to say or do something they, in fact, did not say or do. On the good side, if an actor flubs their lines, the film company could use deepfake technology to fix it. On the bad side, someone with nefarious intentions could map one person’s face onto another person’s body to create pornography, as happened to actress Scarlett Johansson.
Deepfake audio replicates someone’s voice. That can be a gift for someone who has lost their voice to cancer, for example. However, a company in the UK lost €220,000 (roughly $240,000) when “the boss” called and demanded a transfer of money. And that’s not the only instance of criminals using deepfakes.
What is it, and what’s the big deal now?
The word deepfake is a blend of “deep learning” and “fake.” Deep learning is a subset of machine learning, which uses algorithms to detect and predict patterns. Deep learning systems train on vast amounts of unstructured data that would take humans decades to understand and process (Investopedia).
The technology was expensive and exclusive a few years ago. The concern now is the growing ease of access, with tools like the free, downloadable FakeApp putting a user-friendly front end on deepfake code scripts.
What that means in practical terms is that anyone with a computer that has a good graphics card and a lot of memory could feed 400 pictures of someone, gleaned from video, social media, and photographs, into the app. The app creates a virtual model of that person’s face and mannerisms, which is then mapped onto someone else’s head in full-motion video. Likewise, deepfake audio can be created by gathering samples of someone’s voice. In the case of CEOs, the samples might come from earnings calls, YouTube videos, news interviews, or other recordings.
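For the technically curious, here is a rough sketch of just that first step: pulling face pictures out of a video clip. It is written in Python with the free OpenCV library; the file name and the 400-picture target are placeholders for illustration, not FakeApp’s actual code.

```python
# Sketch only: collect face crops from a video, the raw material described above.
# Assumes the opencv-python package is installed; "family_video.mp4" is a placeholder.
import cv2

VIDEO_PATH = "family_video.mp4"   # hypothetical input clip
TARGET_COUNT = 400                # the article's ballpark figure

# A stock face detector that ships with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture(VIDEO_PATH)
saved = 0
while saved < TARGET_COUNT:
    ok, frame = capture.read()
    if not ok:                    # ran out of video frames
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.imwrite(f"face_{saved:04d}.png", frame[y:y + h, x:x + w])
        saved += 1
        if saved >= TARGET_COUNT:
            break
capture.release()
print(f"Saved {saved} face pictures")
```

A real deepfake tool would then spend hours or days training a neural network on those pictures; that part is far too large to sketch here.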
How can you tell the difference?
Actor Jordan Peele created a deepfake video of former President Obama saying things that Obama would never say in public. Peele did it as a public service announcement, co-created with BuzzFeed, to warn about the danger of deepfake videos, especially with the political election season heating up.
Last June the Washington Post reported that deepfakes were coming and tech experts were unprepared: “Top artificial-intelligence researchers across the country are racing to defuse an extraordinary political weapon: computer-generated fake videos that could undermine candidates and mislead voters during the 2020 presidential campaign. And they have a message: We’re not ready.”
That said, when has the news ever reported something followed by “And it’s no big deal”?
What is known, at least for now, is that the bulk of deepfake videos have been porn, and the most concrete tactic for spotting a fake video to date is to look for blinking. Because the photos used to train the software almost always show eyes open, it is hard to make the fake subject blink naturally.
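For readers who want to see how “look for blinking” becomes something a computer can measure, one widely used trick is the eye aspect ratio: the eye’s height divided by its width, computed from six landmark points around each eye. The Python sketch below assumes those landmark points have already been found by other software (for example, a face-landmark library such as dlib), and the 0.2 cutoff is only an illustrative number.

```python
# Sketch only: the eye aspect ratio (EAR) blink heuristic.
# "eye" is six (x, y) landmark points ordered around one eye;
# finding those points is assumed to have been done elsewhere.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # upper-to-lower lid, one pair
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # upper-to-lower lid, other pair
    horizontal = np.linalg.norm(eye[0] - eye[3])   # eye corner to eye corner
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

BLINK_THRESHOLD = 0.2   # illustrative cutoff; real systems tune this per video

def ever_blinks(ear_per_frame):
    """True if the ratio ever drops low enough to look like a blink."""
    return any(ear < BLINK_THRESHOLD for ear in ear_per_frame)

# A wide-open eye scores high...
open_eye = np.array([[0, 2], [2, 4], [4, 4], [6, 2], [4, 0], [2, 0]], dtype=float)
print(round(eye_aspect_ratio(open_eye), 2))   # about 0.67
# ...so a face whose ratio never dips over a whole video is a face that never blinks.
```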
There are, however, some things you can do, now that you know you can’t trust everything you see and hear.
1. What’s the secret word?
The “Grandpa, I’m in Montreal, it’s an emergency, don’t tell my parents, send me $5,000” scam is back with a vengeance. If in the past it was “an actor” mimicking your grandchild’s voice, now, with deepfake audio, the voice on the other end of the line could literally be your grandchild’s voice. And if they post photos of their travels on social media while they are still traveling (of course, tell them not to!), a scammer could actually know where they are, making the scam even more convincing.
Everyone needs to plan for a scene like the phone call in Terminator 2: Judgment Day. “Why is Wolfie barking?” Arnold’s character asks, knowing the dog’s name is really Max. “Wolfie’s fine,” the voice on the line answers, letting Arnold and young John know that, even though it sounds real, it’s an impostor.
David Klass, a grandparent in Hinsdale, IL, suggests creating a different code word for each grandchild, a bit like spies in the 1950s using a very specific string of phrases to identify one another. The same approach could work for CEOs and their finance managers: each party must say a secret word before transferring large amounts of money.
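If it helps to see the idea spelled out, here is a toy Python sketch of that code-word check. The names and code words are invented for illustration, and a real setup would not keep its secrets written out in plain text like this.

```python
# Sketch only: a family's (or finance team's) secret-word check.
# Names and code words below are made up for illustration.
import hmac

CODE_WORDS = {
    "emma": "bluebird",    # one code word per grandchild, as suggested above
    "jacob": "pancakes",
}

def caller_is_verified(name: str, spoken_word: str) -> bool:
    expected = CODE_WORDS.get(name.lower())
    if expected is None:
        return False
    # compare_digest checks the words without leaking timing clues
    return hmac.compare_digest(spoken_word.lower(), expected)

print(caller_is_verified("Emma", "bluebird"))   # True - it really is Emma
print(caller_is_verified("Emma", "montreal"))   # False - hang up and call back
```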
2. Step up your due diligence.
Check multiple sources, especially as we move into the 2020 election. Even your nightly news is no longer the definitive be-all and end-all. Rumors and deepfakes may start on social media, but they can jump to “real news” if repeated by an unsuspecting reporter who didn’t check the source. The good news is that most deepfakes are exposed as fakes quickly. Check different news outlets. Look for the original source of the video. If something seems unbelievable, it very well could be. Double-check.
3. Warn your children and grandchildren to not share too much.
Depending on their age, your grandchildren may scoff at your warnings, but it’s still a good idea to suggest they not post so many photos of themselves online. Nothing posted online can ever be guaranteed to be private. Social media sites change their privacy settings all the time. And there is nothing to prevent a friend or family member from re-sharing something once posted.
4. Watch more fake videos.
Watching some of the readily available videos using deepfake technology is a way to hone your skills at detection. Can you tell the difference? Something about the eyes? A quick profile view of the face? Start here with the “Top 10 Deepfake Videos” from Watchmojo.com on YouTube.