
The Potential Dangers of Deep Fakes

What are deep fakes?


In June 2019, a video edited to look like a news clip surfaced of Mark Zuckerberg thanking "Spectre" and implying that he had access to the stolen data of over a billion people. The year before, a clip of former U.S. President Barack Obama went viral in which he explicitly insulted then-President Donald Trump.


On a more amusing and enthralling side, a video of Jon Snow apologising for the ending of Game of Thrones surfaced after 1.6 MILLION fans signed a petition demanding a better script. Well, he didn't apologise. Mark Zuckerberg didn't make any such statement. And no, Barack Obama didn't make those harsh comments about Donald Trump (though it would certainly have been another great scandal).


These videos were deepfakes: videos that use a form of artificial intelligence called deep learning to superimpose one person's face onto another person's body, typically to make it appear that they engaged in inappropriate acts or behaviour. Typically, but not always. Sometimes they're just amusing, like someone on TikTok impersonating Tom Cruise and dancing.
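To make that a little more concrete, here is a minimal, hypothetical sketch of the idea behind early open-source face-swap tools: one shared encoder learns features common to both faces (pose, expression, lighting), while a separate decoder per person learns to reconstruct that person's face. The layer sizes, function names and training loop below are illustrative assumptions, not any specific tool's code.

    import torch
    import torch.nn as nn

    IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop (illustrative size)

    # One encoder shared by both identities, one decoder per identity.
    encoder = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, 128))
    decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Sigmoid())
    decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Sigmoid())

    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
        lr=1e-4,
    )
    loss_fn = nn.MSELoss()

    def train_step(faces_a, faces_b):
        # Each decoder learns to reconstruct its own person from the shared code.
        recon_a = decoder_a(encoder(faces_a))
        recon_b = decoder_b(encoder(faces_b))
        loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    def swap_a_to_b(faces_a):
        # The swap: encode person A, decode with person B's decoder,
        # producing B's face with A's pose and expression.
        with torch.no_grad():
            return decoder_b(encoder(faces_a))

The shared encoder is the whole trick: because it never knows which person it is looking at, it is forced to capture only the expression and pose, which is exactly what gets transferred when you decode with the other person's decoder.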




What’s the potential harm?


Deepfakes started as a concept in 1997, when three researchers wrote a paper about developing a program that could synthesise video clips of a person speaking to match a new audio track, which is the essential idea of a deepfake. Studies show that 99% of the deepfakes created in 2018 were pornographic, with the faces of female celebrities mapped onto the bodies of porn performers. The curation of these videos clearly shows that the technology is being weaponised against women.


Additionally, big scandals have been needlessly created by political figures being made the subjects of these videos. Imagine a video of a world leader saying, out of the blue, "I have access to your personal information"; strange, right? Famous politicians such as Joe Biden and Vladimir Putin have been synthesised into such videos making false statements, and some of these scandals have led to legal action after months spent tracking down the creators. These aren't jokes.


With the number of deepfakes created every day increasing, it's becoming harder to trust what we see in an increasingly complex world.


Is there a solution?

Well, the answer is no. There is currently no way to know exactly who is going to upload a deepfake, from where, or how many deepfakes are already out there.


However, a student from Stanford, together with a group of MIT graduates, did create an algorithm that can check whether a suspected video really is a deepfake. Even though it is only right about 80% of the time, it is to date the closest thing to a solution. What it does is detect mismatches between the lip movements and the sounds.
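The internals of that particular algorithm aren't covered here, but here is a rough, hypothetical sketch of how lip-sync mismatch detection generally works: encode a short clip of mouth crops and the matching audio window into the same embedding space, then score how well they agree. Everything below (the encoders, feature sizes, and the 0.5 threshold) is an assumption for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Assumed input sizes: 5 grayscale 32x32 mouth crops, and 20 frames
    # of 13 audio coefficients, both flattened into vectors.
    MOUTH_FEATS = 5 * 32 * 32
    AUDIO_FEATS = 20 * 13

    video_enc = nn.Sequential(nn.Linear(MOUTH_FEATS, 256), nn.ReLU(), nn.Linear(256, 128))
    audio_enc = nn.Sequential(nn.Linear(AUDIO_FEATS, 256), nn.ReLU(), nn.Linear(256, 128))

    def sync_score(mouth_clip, audio_window):
        # Cosine similarity between the two embeddings: high means the
        # lips and the audio agree, low means they drift apart.
        v = F.normalize(video_enc(mouth_clip), dim=-1)
        a = F.normalize(audio_enc(audio_window), dim=-1)
        return (v * a).sum(dim=-1)

    def flag_deepfake(scores, threshold=0.5):
        # If the average lip/audio agreement across a video is low, flag it.
        return scores.mean().item() < threshold

In practice the encoders are trained on large amounts of genuine talking-head footage, so real clips embed close together while manipulated lips drift away from the audio; the 80% figure above is a reminder of how noisy even the best detectors still are.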


The creator of the algorithm did say that a technical solution alone will not work and that non-technical action needs to be taken, and he could not be more right! Focusing on such non-technical solutions can help reduce misinformation. For example, a deepfake of Biden was detected by an interviewer who realised that his own question had been altered.


To sum up this article: on an increasingly complicated internet, fighting deepfakes becomes a real task, because legal penalties and accountability for the people who create them are rarely enforced. Most of these doctored videos can put an individual's career and reputation on the line or create a scandal, and many are succeeding in doing exactly that.


