April 4, 2023
Deepfakes: Something to Fear?
Garrett Burnett

What is a Deepfake?
Deepfakes are AI-generated videos or voice recordings that replace one person’s likeness with another’s. They have been used to mock actors, businesspeople, and politicians, among others. According to a Business Insider article, “the most common [method for creating deepfakes] relies on the use of deep neural networks involving autoencoders that employ a face-swapping technique.” Although this sounds complicated, all you need to get started is a small collection of video samples from multiple angles and some audio recordings. This makes it incredibly easy to spoof celebrities and politicians, since they are so frequently recorded speaking on camera.
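For the technically curious, here is a minimal, purely conceptual sketch of the shared-encoder, two-decoder autoencoder idea that Business Insider describes. This is not any real deepfake tool’s code; the class names, layer sizes, and image dimensions are all invented for illustration.

```python
# Conceptual sketch of the autoencoder face-swap idea (NOT a working
# deepfake pipeline). All names and dimensions here are illustrative
# assumptions, not taken from any actual tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a face image into a shared latent representation."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image for ONE identity from the latent code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# One encoder is trained on faces of BOTH people; each decoder learns to
# reconstruct only its own person. At inference time, feeding person A's
# face through the shared encoder but person B's decoder yields the swap.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)      # stand-in for a video frame of person A
swapped = decoder_b(encoder(face_a))   # rendered with person B's likeness
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```

The key design choice is that the single shared encoder learns identity-agnostic facial structure (pose, expression, lighting) while each decoder learns to render one specific face, which is what makes the swap possible.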
With all these sources for the AI to learn from, deepfakes are becoming hyperrealistic. Additionally, numerous online platforms let you create deepfakes for free, so it is becoming easier too. You could harmlessly impersonate a friend as a joke, but you could also create a deepfake of the Ukrainian president surrendering to Russia (this has actually been done).
People have valid reasons to be afraid of technology like this. No one wants to be portrayed saying things that they have never said or would never say. However, as Maddy Myers notes in a Polygon article about deepfake technology, “if you wanted to prevent a deepfake, you’d need to delete every single visual and auditory record of your existence.” That is simply unrealistic for most people. In the age of social media, our lives are documented constantly, so it would be difficult to scrub ourselves from the Internet completely. The threat of deepfakes adds a whole new dimension to the potential implications of our digital footprints.
Political Misinformation

A bill recently passed by the Washington State Senate in the United States allows victims of deepfake videos to seek injunctive or equitable relief (i.e., court-ordered remedies such as blocking the video’s distribution). The bill passed the Senate by a 35–15 margin, but those in opposition highlighted the very broad language defining what kind of media counts as “synthetic.” The bill operates primarily in the electoral space, as it protects candidates for political office from deepfakes showing them saying or doing something that they, in reality, did not.
The bill defines the synthetic media covered in these circumstances as “an image, an audio recording, or a video recording of an individual’s appearance, speech, or conduct that has been intentionally manipulated with the use of generative adversarial network techniques or other digital technology in a manner to create a realistic but false image, audio, or visual that produces a depiction that to a reasonable individual is of a real individual in appearance, action, or speech that did not actually occur in reality.” The senators who opposed the bill noted that this wording could leave candidates vulnerable to lawsuits for applying something as simple as a photo filter.
Overall, experts expect more global legislation targeting deepfake technology as it becomes more advanced and it becomes increasingly difficult to distinguish real audio and video from deepfakes.
Voice Deepfakes

Voice deepfakes are frightening.
According to a recent Wired article, “voice-impersonation cons work best when the target is caught up in a sense of urgency and just trying to help someone or complete a task they believe is their responsibility.” Despite the advancements in deepfake voice impersonation, the person being conned must still believe that the caller is who they say they are and that this person would be calling them specifically to ask for money. As you probably know, this type of scam most often succeeds against the elderly, who believe that a grandchild has been injured, imprisoned, or caught in some other urgent situation. My own grandmother has been on the receiving end of these calls, but she has thankfully known to hang up the phone and report the call to the proper authorities. Not everyone has been as lucky as my grandmother, though, and this type of voice-impersonation con has been happening for decades.
Most people are familiar with the voice-impersonation con, but fewer are aware of the same con carried out with AI technology that convincingly spoofs the voice of the grandchild or other relative calling. Imagine answering the phone and hearing the exact voice of someone you know. That con would be far more convincing than a stranger calling and claiming to need assistance.
Lastly, there is a real and present fear that these voice-impersonation cons will advance further, using the aforementioned AI technology to create a perfectly spoofed voice and coupling it with language models that generate conversation in response to whatever the victim says over the phone. This advancement would allow scammers to run a large number of cons simultaneously: no longer would they need to stay on the phone themselves to extract the cash. They could simply press play and let the cons run non-stop.
The only thing preventing this wide-scale con now is the difficulty of producing quality voice deepfakes. According to the Wired article, “the technology to create convincing, robust voice deepfakes is powerful and increasingly prevalent in controlled settings or situations where extensive recordings of a person’s voice are available.” Thankfully, most of us do not have large amounts of our recorded voice stored away for AI programs to turn into perfectly convincing voice deepfakes, but, as the technology progresses, con artists will need fewer and fewer voice samples to create them.
