Camera apps have come a long way over the past few years. Not long ago, simply having a smartphone with a camera was reason enough to be happy taking a photo. Now, with so many camera apps around, users have countless options to edit their images however they like.

You can create a cartoon of yourself, edit the background, add an object to the image, remove blemishes, and much more. All of these are good uses of advancing technology. Like any advancement, however, camera technology has its drawbacks as well.

One major drawback, or rather a threat, is the invention of technology that gives users the ability to create videos or images that look very real. What is even more frightening is that this technology, called DeepFake, is readily available to the masses.

In fact, anyone with a computer or similar device and access to the internet can technically create DeepFake content. Tools for creating DeepFakes are freely available on GitHub.

Basically, DeepFake uses AI (artificial intelligence) and machine learning to manipulate videos and other forms of digital media, producing images, videos, or audio that appear to be real.

Because it relies on AI and machine learning, the technology analyzes videos and images of the target person from all angles, then accurately mimics that person's behavior and speech.
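To give a rough idea of the mechanism behind face-swap DeepFakes, here is a heavily simplified sketch. The classic setup trains one shared encoder with a separate decoder per person; at swap time, person A's frame is encoded but decoded with person B's decoder. The linear maps, names, and dimensions below are all invented stand-ins for the real neural networks, not any actual DeepFake tool's code.

```python
import numpy as np

# Illustrative sketch only: stand-ins for a shared encoder and
# per-person decoders. All sizes and names here are hypothetical.
rng = np.random.default_rng(0)

FACE_DIM, LATENT_DIM = 64, 8                         # made-up image/latent sizes
encoder = rng.normal(size=(LATENT_DIM, FACE_DIM))    # shared across both people
decoder_a = rng.normal(size=(FACE_DIM, LATENT_DIM))  # "trained" on person A
decoder_b = rng.normal(size=(FACE_DIM, LATENT_DIM))  # "trained" on person B

face_of_a = rng.normal(size=FACE_DIM)  # stand-in for one frame of person A

# Normal reconstruction: encode A's frame, decode with A's own decoder.
latent = encoder @ face_of_a
reconstruction = decoder_a @ latent

# The swap: keep A's encoded expression/pose, but decode with B's
# decoder, yielding "B's face" performing A's expression.
swapped = decoder_b @ latent

print(reconstruction.shape, swapped.shape)
```

In a real system the encoder and decoders are deep convolutional networks trained on many frames of each person, which is why the technique needs footage of the target "from all angles."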

The technology gained widespread attention in late 2017, when a fake porn video featuring Wonder Woman actress Gal Gadot was reportedly published on Reddit by a user named "deepfakes".

One recent example of DeepFake technology was a video of Facebook CEO Mark Zuckerberg that surfaced in July this year. In the video, the CEO appears to say that he is in control of billions of people's stolen data. Zuckerberg says in the video:

One man with total control of billions of people’s stolen data. All their secrets, their lives, their futures. I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future

The video looked entirely real, voice included. The Facebook CEO isn't the only victim of DeepFake; there are many others, including Kit Harington (Jon Snow from Game of Thrones) and former President Barack Obama.

Deeptrace, an Amsterdam-based cybersecurity startup, counted 14,698 DeepFake videos on the internet when it last ran the numbers in June and July. Last December the count stood at 7,964, an increase of 84% in just seven months.

Though most clips created with DeepFake technology are harmless, some could be devastating: a celebrity in a compromising position, provocative words put in the mouth of a public figure, and more. They could even prove a threat to national security.

Moreover, hackers could benefit as well. They can use DeepFake technology just like any other phishing lure, creating clickbait content that tricks users into clicking a malicious component inside the video or redirects them to malicious websites.

Hackers can also use it to commit fraud, creating fakes convincing enough to bypass identity verification software or biometric checks. There have already been reports warning that criminals will soon be able to mimic human voices in real time, a capability that could be used to target call centers.

For all these reasons, DeepFake technology has raised serious concerns about how easily videos can be manipulated to spread misinformation and tarnish reputations. Efforts to curb such misuse are underway, but so far there hasn't been much success.

Recently, the U.S. House of Representatives approved a bill to tackle the rising threat of DeepFake technology. The bill, called the IOGAN Act (Identifying Outputs of Generative Adversarial Networks Act), was introduced by Ohio Republican representative Anthony Gonzalez.

The act directs NIST (the National Institute of Standards and Technology) and the NSF (National Science Foundation) to invest in and support research into technologies that can distinguish DeepFake content from genuine content.

In July, the U.S. House of Representatives' Intelligence Committee also asked major social media companies about their plans to fight DeepFakes in the 2020 election. The inquiry followed an incident in which President Trump tweeted a doctored video of House Speaker Nancy Pelosi.

Alongside the political will, private companies and researchers are also developing technologies that help detect DeepFake content. Since AI is used to create DeepFakes, it can also be used to detect them.

Companies such as Facebook and Microsoft are working to detect and remove such videos. Earlier this year, both companies revealed plans to work with top universities across the U.S. on developing a database of fake videos.

In September, Facebook, Microsoft, Amazon Web Services (AWS) and academics from many top U.S. universities announced a Deepfake Detection Challenge. The objective of the challenge is to develop open source detection tools. Facebook said in a blog post:

The [hope] is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer

What can you do? You can also learn to spot DeepFake content and help get it removed from the internet. If you study a DeepFake video closely, you can often tell the difference: the ears or eyes, for instance, may not match the borders of the face.
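Researchers have also automated this kind of close inspection. One well-known early heuristic exploited the fact that many DeepFakes showed unnaturally infrequent blinking. The sketch below illustrates that idea only: the per-frame "eye openness" signal, thresholds, and function names are all invented for this example, not part of any real detection tool.

```python
# Illustrative heuristic only: count blinks from a hypothetical
# per-frame eye-openness score and flag clips with an implausibly
# low blink rate. All numbers below are made up for demonstration.

def count_blinks(eye_openness, threshold=0.2):
    """Count transitions into a 'closed eyes' state (score < threshold)."""
    blinks, closed = 0, False
    for score in eye_openness:
        if score < threshold and not closed:
            blinks += 1
            closed = True
        elif score >= threshold:
            closed = False
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is far below a typical human's."""
    minutes = len(eye_openness) / (fps * 60)
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute

# A fake 10-second clip at 30 fps in which the eyes never close:
never_blinks = [0.9] * 300
print(looks_suspicious(never_blinks))  # True
```

A production detector would of course extract the eye-openness signal from video frames with a facial-landmark model and combine many such cues, but the principle is the same: look for statistical tells that the generator failed to reproduce.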

However, experts note that the gap between a DeepFake and a real video is narrowing as the technology advances.

Going forward, it will be interesting to see what happens with DeepFake: whether it becomes so realistic that people start to believe it, or researchers come up with tools that can reliably flag any fabricated video or image.




Aman Jain

Aman holds an MBA in Finance and currently leads VeRa FinServ, a financial research firm. He loves watching science fiction movies, following the latest tech news, playing video games and (of course) cricket.
