The signs of a deepfake image used to be easy to spot, but generative AI is making us question everything we see and hear today. With each new AI model released, the telltale signs of a fake image diminish, and to add to the confusion, you can now create deepfake videos, voice clones of your loved ones, and fake articles in seconds.
To avoid being fooled by AI deepfakes, it’s worth knowing what kind of risk they pose.
The Evolution of Deepfakes
A deepfake shows someone doing something that never happened in real life. It is completely fake. We laugh at deepfakes when they're shared on the internet as memes or jokes, but very few people find it funny when they're used to mislead us.
In the past, deepfakes were created by taking an existing photo and modifying it in image-editing software such as Photoshop. What sets an AI deepfake apart is that it can be created from scratch using deep learning algorithms.
The Merriam-Webster dictionary defines a deepfake as:
A picture or recording that is convincingly altered and manipulated to misrepresent a person as doing or saying something that was not actually said or done.
But with advances in AI technology, this definition is starting to look outdated. Using AI tools, deepfakes now include images, text, video, and voice cloning. Sometimes, all four AI generation modes are used at once.
Because the process is automated, incredibly fast, and cheap, AI is the perfect tool for creating deepfakes at a rate we've never seen before, all without needing to know a thing about editing photos, videos, or audio.
The Great Danger of AI Deepfakes
A host of AI video generators are available, along with plenty of AI voice generators. Throw in a large language model like GPT-4 and you have a recipe for creating some of the most believable deepfakes we've seen in modern history so far.
Knowing the different types of AI deepfakes, and how they can be used to trick you, is one way to avoid being misled. Here are some serious examples of how AI deepfake technology poses a real threat.
1. AI Identity Theft
You've probably seen them. Among the first truly viral AI deepfakes to spread around the world were an image of Donald Trump being arrested and one of Pope Francis wearing a white puffer jacket.
While one may seem like an innocent re-imagining of what a famous religious figure might wear on a cold day in Rome, the other image, which shows a political figure in serious legal trouble, has far greater consequences if taken as real.
Currently, AI deepfakes mainly target celebrities, politicians, and other famous individuals. In part, that's because there are plenty of photos of famous people on the internet, which probably helped train the models in the first place.
In the case of an AI image generator like Midjourney—which was used to deepfake Trump and the Pope—the user only needs to type a text prompt describing what they want to see. Keywords can be used to set the style of art, such as a photograph or photorealism, and the results can be refined by upscaling the resolution.
You can easily learn to use Midjourney and try it yourself, but for obvious moral and legal reasons, you should avoid posting these images publicly.
Unfortunately, being an average, non-famous person doesn't guarantee you'll be safe from AI deepfakes either.
The problem lies in a key feature that AI image generators offer: the ability to upload your own image and manipulate it with AI. And a tool like Outpainting in DALL-E 2 can extend an existing image beyond its boundaries via a text prompt describing what else you want to create.
If someone else does this to your photos, the dangers are far greater than a deepfake of the Pope in a white jacket: they can use the images anywhere, pretending to be you. While most people generally use AI with good intentions, there are very few restrictions preventing people from using it to cause harm, especially in cases of identity theft.
2. Deepfake Voice Clone Scams
With the help of AI, deepfakes cross a line most of us are not ready for: fake voice clones. With a small amount of original audio—perhaps from a TikTok video you once posted, or a YouTube video you've uploaded—an AI model can reproduce your one-and-only voice.
It's strange and scary to imagine receiving a phone call that sounds exactly like a family member, friend, or colleague. Deepfake voice clones are a serious enough concern that the Federal Trade Commission (FTC) has issued a warning about them:
Don’t trust the voice. Call the person who allegedly contacted you and verify the story. Use a phone number that you know is theirs. If you cannot contact your loved one, try to contact them through another family member or their friends.
The Washington Post reported the case of a couple in their 70s who received a phone call from someone who sounded just like their grandson. He said he was in jail and urgently needed money for bail. With no other reason to doubt who they were talking to, they went ahead and handed the money over to the scammer.
It's not just the older generation at risk, either. The Guardian reported another example: a bank manager approved a $35 million transfer after a series of "deep-faked calls" from someone they believed to be a bank director.
3. Mass-Produced Fake News
Large language models, like ChatGPT, are very good at producing text that reads as human-written, and we currently don't have effective tools to tell the difference. In the wrong hands, fake news and conspiracy theories become cheap to manufacture and take far longer to debunk.
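To see why detection is so hard, here is a toy sketch (not a real detector, and not any tool named in this article) of two crude stylometric signals sometimes discussed in detection research: lexical diversity and variation in sentence length. The function name and thresholds are purely illustrative assumptions; real detectors use far more sophisticated signals and still misfire often.

```python
import re
import statistics

def stylometric_signals(text: str) -> dict:
    """Compute two crude statistics of a text sample.

    This is a toy illustration only: neither signal reliably
    separates human-written from AI-generated text.
    """
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]

    # Lexical diversity: ratio of unique words to total words.
    lexical_diversity = len(set(words)) / len(words) if words else 0.0

    # "Burstiness": how much sentence lengths vary across the sample.
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    burstiness = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

    return {"lexical_diversity": lexical_diversity, "burstiness": burstiness}

sample = ("The quick brown fox jumps. The quick brown fox jumps. "
          "The quick brown fox jumps.")
print(stylometric_signals(sample))
```

A highly repetitive sample scores low on both signals, but plenty of human writing does too, which is exactly why such heuristics cannot be trusted on their own.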
The spread of misinformation is nothing new, of course, but a research paper published on arXiv in January 2023 explains that the problem lies in how easy it is to scale up output using AI tools. The researchers call these "AI-generated influence campaigns", which they say could, for example, be used by politicians to outsource their political campaigns.
Combining more than one AI-generated source creates a more convincing deepfake. For example, an AI model can generate a well-written and plausible news story to accompany the fake image of Donald Trump being arrested, lending it more legitimacy than if the image were shared on its own.
Fake news isn't limited to images and text, either: advances in AI video generation mean we're seeing more deepfake videos emerge. Here's one with Robert Downey Jr.'s face grafted onto a video of Elon Musk, posted by the YouTube channel Deepfakery.
Creating a deepfake can be as simple as downloading an app. You can use an app like TokkingHeads to turn still images into animated avatars, uploading your own image and audio to make it look like the person is speaking.
For the most part, it's fun and entertaining, but there's also potential for trouble. It shows us how easy it is to use anyone's image to make it appear as though they said words they never uttered.
Don’t Be Fooled by an AI Deepfake
Deepfakes can be deployed rapidly at very little cost, with a low bar for skill or computing power. They can take the form of a generated image, a voice clone, or a combination of AI-generated images, audio, and text.
It used to be far more difficult and labor-intensive to create a deepfake, but now, with so many AI apps available, almost anyone has access to the tools used to create them. As AI deepfake technology grows more advanced, it's worth keeping a close eye on the dangers it poses.