Deepfakes: Well, that's gonna be a sh*t show!
I am not a fan but they were bound to become a thing.
Brace yourself. Deepfakes are coming!
Deepfake technology allows anyone to alter an individual's appearance and voice so that they appear to be saying or doing something on camera that never actually happened.
A couple of weeks ago, I shared a newsletter on posthumous privacy and cited the case of Anthony Bourdain and the use of his AI-altered voice. In it, I also talked about the use of holographic technology by Kanye West to bring Kim Kardashian’s father “back to life.” What I did not share in detail, however, is how this was made possible and how it may affect life as we know it now and in the near future.
How Deepfakes Started
An early precursor of deepfake technology appeared in a project known as the 'Video Rewrite Program', published in 1997. The team modified existing video footage of a person speaking to depict that person mouthing the words of a different audio track.
After that, the ball started rolling: around the end of 2017, a Reddit user named "deepfakes" (along with others) shared pornographic and other videos in which celebrities' faces had been swapped onto performers' bodies.
From then on, we have seen many examples of deepfake videos, some of them shared in the video below.
Making celebrities, like Kim Kardashian, say something they aren’t actually saying may not have a major impact on the world.
However, the possible (and very likely) misuse of deepfakes in political campaigns could have horrendous implications for society at large.
The Dangers of Deepfake Technology
The spread of misinformation
This has to be the biggest threat by far. In an era where anyone can access such technology and use it however they want, it is easy to put words in someone's mouth - quite literally. A leader's video can be altered to spread a false message (as has already been done in the past).
If someone decides to take another person's footage and deepfake it, there are few ways they can be stopped. Even if the fake is eventually debunked, the damage to an individual's reputation or business may already be done.
An increase in fraudulent activities
In 2019, the CEO of a U.K.-based energy firm was scammed over the phone by an individual who used audio deepfake technology to impersonate the voice of the chief executive of the firm's parent company, instructing him to transfer €220,000 into a Hungarian bank account. That in itself is a real threat to many.
The “it was a deepfake” excuse
Corrupt politicians and business leaders caught red-handed on tape will be able to claim the recordings aren’t real by stating that the footage or voice recording has been doctored by the use of deepfake technology.
How to Protect Yourself From Deepfake Technology
Microsoft has been testing software for spotting deepfakes. But until such a tool is made public, here are some tips that you can use to stay safe.
Be watchful of tell-tale signs such as bad lip-synching or patchy skin tones in videos. There can be flickering around the edges of transposed faces. And fine details, such as hair, are particularly hard for deepfakes to render well, especially where strands are visible on the fringe. Badly rendered jewelry and teeth can also be a giveaway, as can strange lighting effects, such as inconsistent illumination and reflections on the iris.
Reduce the number of times you share your voice and videos. Be very mindful of what you choose to share, as it can be used against you.
Do not create deepfake videos. If you create a deepfake of yourself, you shouldn’t be surprised if parts of it end up being used against you.
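To make the "flickering around the edges" tell a bit more concrete: abrupt frame-to-frame changes in a region of video can stand out against an otherwise stable clip. Below is a purely illustrative toy sketch (not a real deepfake detector - real detection relies on trained models like the one Microsoft has been testing). The helper names `mean_abs_diff` and `flag_flicker` and the synthetic four-pixel "frames" are all my own invention for the example.

```python
# Toy illustration of the "flicker" tell: flag frames whose change from
# the previous frame jumps far above the typical (median) change.
# Frames here are tiny synthetic grayscale pixel lists, not real video.

def mean_abs_diff(frame_a, frame_b):
    """Mean absolute per-pixel difference between two grayscale frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def flag_flicker(frames, threshold=5.0):
    """Return indices of frames whose change from the previous frame
    exceeds `threshold` times the median change (a crude flicker flag)."""
    diffs = [mean_abs_diff(frames[i - 1], frames[i])
             for i in range(1, len(frames))]
    baseline = sorted(diffs)[len(diffs) // 2] or 1e-9  # median; avoid /0
    return [i + 1 for i, d in enumerate(diffs) if d > threshold * baseline]

# Synthetic "video": mostly stable frames, with one abrupt two-frame blip.
frames = [[10, 10, 10, 10]] * 5 + [[200, 200, 10, 10]] + [[10, 10, 10, 10]] * 4
print(flag_flicker(frames))  # → [5, 6]: the blip and the snap back
```

A real pipeline would read frames with a library such as OpenCV and restrict the check to the detected face region, but the basic idea - compare each frame to its neighbor and look for outliers - is the same.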
Stay safe out there & don’t let the deepfakes get you. 😉
Peace, love & anarchy,