Emerson College’s only independent, student-run newspaper since 1947

The Berkeley Beacon

Op-ed: Deepfakes—a newer, more dangerous era of fake news

One of the most concerning aspects regarding deepfakes in the 2020 election is how accessible the software is, especially to the wrong people. / Illustration by Ally Rzesa

With just over a year and a half until the 2020 presidential election, a new form of fake news is growing more prominent, and more dangerous, than what we experienced in the months leading up to the 2016 election. Known as “deepfakes,” these digitally manipulated videos, which typically circulate on social media, depict high-profile individuals saying or doing things they never actually did.

Digital face-replacement techniques like those behind deepfakes have already appeared in internet memes and even films. In the final scene of the 2016 Star Wars film Rogue One, a digitally recreated young Princess Leia speaks to a fellow member of the Rebellion, with Carrie Fisher’s likeness mapped onto another actor’s performance. In the 1994 film Forrest Gump, producers digitally manipulated archival footage so that former President John F. Kennedy appeared to speak directly to Forrest Gump at the White House.

But as we’ve learned time and time again, when a new technology elicits positive breakthroughs and usefulness, people will attempt to exploit it through detrimental means.

There is no doubt in my mind that deepfakes will arise as an issue in the 2020 election. Creating a video that seamlessly shows a candidate doing or saying something they never did—to the point where its validity or invalidity appears indistinguishable to the human eye—seems like any internet troll’s dream.

According to an article in The Guardian, the Belgian social-democratic party Socialistische Partij Anders posted a deepfake of President Trump offering advice to the people of Belgium on Twitter and Facebook in May 2018. In the video, the president states directly into the camera, “as you know, I had the balls to withdraw from the Paris Climate Agreement, and so should you.” The video received approximately 92,000 views and 525 reactions—78 of them “angry” reacts—on Facebook.

While the video clearly looks fake and “Trump” even states at the end of the video, “we all know climate change is fake, just like this video,” many still believed in its message and wrote hundreds of comments expressing their anger and outrage toward the president.

One of the most concerning aspects of deepfakes in the 2020 election is how accessible the software is, especially to the wrong people. Until late 2017, deepfake technology was used primarily by the artificial intelligence research community. But now any internet troll with some computer science knowledge can create deepfakes, thanks to a Reddit user under the moniker “deepfakes” who used TensorFlow, Google’s free, open-source machine-learning software, to post digitally altered pornographic videos of celebrities online, opening the floodgates to the general public.

While deepfakes remain out of sight on the 2020 campaign trail thus far, it is only a matter of time until we see one surface of Kamala Harris or Beto O’Rourke spouting offensive nonsense, or even appearing to engage in an incriminating act that never happened.

Many are wondering how to educate the public on deepfakes. According to a CNN article published this year, deepfake technology is on the U.S. government’s radar. Through the Defense Advanced Research Projects Agency, the Pentagon has started working with some of the country’s biggest research institutions—such as the University of Colorado in Denver and SRI International, an American nonprofit scientific research institute—to understand, and ultimately combat, deepfake technology. However, the technology is progressing quickly: some of the deepfake examples included in the CNN article look almost indistinguishable from the real videos they manipulate.

In the twentieth century, photos and videos served as prime, factual evidence for some of the biggest news stories. But what happens if we enter a period where we can no longer trust our eyes and ears? What happens when we are forced to doubt every video we come across on our social media feeds? More importantly—what happens if we begin to dismiss real videos as fake?

What if deepfake technology had existed during Watergate, 9/11, and other pivotal moments in American history? What if the public, knowing deepfakes were possible, had written off the Nixon tapes as mere manipulated audio files concocted to sabotage the president? Our problem may not be believing what’s false to be true, but believing what’s true to be false.

The only advice I can offer, before deepfakes inevitably become a serious issue in the 2020 election, is the same advice journalists have given the public since the rise of fake news in the 2016 election. Always consider your news source: Does it have a political bias? Is it reputable? Check the date the information was published. And most importantly, make sure it isn’t intended as humor.

In the near future, our own eyes may no longer be a reliable guide to the media we consume. Technology often hands us solutions, but unfortunate repercussions tend to follow close behind. There’s no telling whether deepfakes will cause lasting damage to our political climate and democracy, but we can at least stay alert to a dangerous and changing internet environment as we enter this next election.
