As fake news becomes a pressing issue around the world, only 26% of Americans believe they can tell real news from fake. Recent technological advances have made fake news harder than ever to recognize. For example, an algorithm recently developed by engineers at Stanford University allows an ordinary user to twist the words of politicians, TV anchors, and other public figures. While this technology was originally created to make seamless film edits, there is widespread concern that it will be used for manipulation.
Why the Deepfake Algorithm Was Created
Video editors often need to change footage after the voice-over has already been recorded. To save time and money, they may use dedicated AI-based software that makes such edits virtually unnoticeable. These days, the developers of Movavi and other video editing software are adopting AI tools. However, there is growing concern that these tools may be used to spread propaganda.
The Danger of Deepfakes
While programs based on deepfake technology are not yet advanced enough to create flawless fakes, they are already used to produce manipulative content. For example, malicious users alter their appearance with such software and call politicians, trying to trick them into disclosing classified information. The creators of such videos may also make a politician look intoxicated, damaging their chances of re-election.
To minimize the risks, lawmakers have proposed adding watermarks to such videos to distinguish them from unedited footage. However, no working solution has been implemented yet.
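One simple way to picture the watermarking idea is least-significant-bit (LSB) embedding: a short identifier is hidden in the lowest bits of a frame's pixels, invisible to the eye but readable by software. This is a minimal illustrative sketch, not any lawmaker's actual proposal; real-world watermarks need to survive compression and re-encoding, which plain LSB embedding does not.

```python
import numpy as np

def embed_watermark(frame: np.ndarray, bits: list) -> np.ndarray:
    """Hide watermark bits in the least significant bit of the first pixels."""
    marked = frame.copy()
    flat = marked.reshape(-1)  # a view into `marked`, so writes stick
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # overwrite the lowest bit
    return marked

def extract_watermark(frame: np.ndarray, n_bits: int) -> list:
    """Read the watermark bits back out of the frame."""
    flat = frame.reshape(-1)
    return [int(flat[i] & 1) for i in range(n_bits)]

# Hypothetical 8x8 grayscale "frame" standing in for real video
frame = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
watermark = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_watermark(frame, watermark)
assert extract_watermark(marked, len(watermark)) == watermark
```

Because only the lowest bit of each pixel changes, the marked frame is visually indistinguishable from the original, yet any viewer app could flag it as edited footage.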
- A deepfake of Tom Cruise became a TikTok darling. He doesn't do anything sinister in the videos; nevertheless, many people find the appearance of his clone disconcerting.
- In 2019, artists released a deepfake of Mark Zuckerberg in which he appears to boast about controlling users' stolen data. It was made possible by an algorithm that allows anyone to copy the voice and facial expressions of another person.
- One of the most artistic uses of this technology was a deepfake of Salvador Dalí created by the GS&P agency. The creators used old video interviews and quotes from the artist to build a compelling likeness of him.
- Not all deepfakes are made for entertainment, however. In 2022, a deepfake of Volodymyr Zelensky, the President of Ukraine, appeared to order his troops to lay down their weapons. While the quality of that video was rather poor, many people wonder whether they will be able to recognize a deepfake once the technology improves.
How Realistic Are Modern Deepfake Videos?
High-quality deepfake videos fool viewers about 60% of the time, meaning most people who watch them initially believe they are real. The shorter a video is, the harder it is to notice that it has been edited.
To create a believable deepfake, you need roughly 40 minutes of footage of the target person speaking. That much material gives the AI-based algorithm enough training data to produce a realistic video.
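The classic deepfake architecture that trains on such footage pairs a shared encoder (which learns pose and expression common to both faces) with one decoder per identity. The sketch below uses untrained, randomly initialized numpy matrices purely to show the data flow; all layer sizes are illustrative, and real systems use deep convolutional networks trained for days.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder: compresses any 64x64 face frame into a 64-dim code.
# One decoder per identity: reconstructs that person's face from the code.
# All weights here are random stand-ins, not a trained model.
W_enc = rng.standard_normal((64, 4096)) * 0.01    # frame -> expression code
W_dec_a = rng.standard_normal((4096, 64)) * 0.01  # code -> person A's face
W_dec_b = rng.standard_normal((4096, 64)) * 0.01  # code -> person B's face

def encode(frame: np.ndarray) -> np.ndarray:
    """Map a 64x64 frame to a compact pose/expression code."""
    return np.tanh(W_enc @ frame.reshape(-1))

def decode(code: np.ndarray, W_dec: np.ndarray) -> np.ndarray:
    """Reconstruct a 64x64 face from the code with one identity's decoder."""
    return (W_dec @ code).reshape(64, 64)

# The face swap: encode a frame of person A, decode with B's decoder.
# After training, this yields B's face wearing A's expression.
frame_a = rng.random((64, 64))
swapped = decode(encode(frame_a), W_dec_b)
assert swapped.shape == (64, 64)
```

The roughly 40 minutes of footage mentioned above supplies the thousands of frames needed to train the encoder and both decoders until the swapped output looks natural.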
We are used to seeing heavily edited images in advertising and on the pages of glossy magazines. The creators of deepfakes believe such videos will also become part of our everyday lives. That is why it is more important than ever to learn to recognize them for what they are, so that they don't sway our political decisions or the way we see public figures.
Deepfake technology is based on an algorithm that still requires further development. It cannot yet create fully realistic deepfakes, but it can already subtly change our perception of people and events. As more and more people come to believe fake news and conspiracy theories, it becomes more important than ever to set clear rules for the developers of such software. Anyone watching a deepfake video should be able to learn its origin, and modern society needs proof of authenticity to tell fake content from unedited footage.
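Proof of authenticity in practice means cryptographically signing footage at the source so any later edit is detectable. Real provenance efforts use public-key signatures; the stdlib sketch below substitutes a shared-secret HMAC (with a placeholder key) just to show the verify-before-trust flow.

```python
import hashlib
import hmac

# Hypothetical publisher-side signing: a broadcaster tags each video file
# with its secret key; viewer software checks the tag before trusting it.
SECRET_KEY = b"broadcaster-private-key"  # placeholder, not a real key scheme

def sign_video(video_bytes: bytes) -> str:
    """Produce an authenticity tag for the footage."""
    return hmac.new(SECRET_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, tag: str) -> bool:
    """Check footage against its tag; any edit invalidates the tag."""
    return hmac.compare_digest(sign_video(video_bytes), tag)

original = b"\x00\x01 raw video frames \x02\x03"
tag = sign_video(original)
assert verify_video(original, tag)                       # untouched footage passes
assert not verify_video(original + b" deepfaked", tag)   # any edit fails
```

A deepfake derived from signed footage would fail this check, giving viewers a mechanical way to tell original material from manipulated copies.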