Is the World Going to Be Deceived by Deepfakes?

Adriano Marques August 17, 2020

In this week’s Exponential Chats, some of the team members responsible for Amalgam’s development will have a chat addressing a common concern about deepfakes deceiving the world with ultra-realistic images, videos, and voice synthesis. Whether you’re concerned or skeptical, come join us and participate by asking questions or giving your opinion in the chat.

– Adriano Marques is the founder and CEO of Exponential Ventures.
– Nathan Martins is a Machine Learning/DevOps Engineer at Exponential Ventures, where he works on projects to democratize AI, as well as other cutting-edge innovations.
– Harlei Vicente is a Senior Software Architect specializing in creating great experiences for the end user and optimizing interfaces for desktop and mobile.
– Igor Muniz is a Data Scientist specializing in deep learning models and a Kaggle competitor in his free time.

Deepfakes are digitally fabricated, ultra-realistic forms of media that can deceive a human into thinking that what they see or hear is real. More often than not, Deep Learning is the Machine Learning technique used to generate the fake content, hence the name deepfake.
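To make that idea a bit more concrete, here is a minimal, self-contained sketch of the generative-adversarial training loop that underlies many deepfake systems. It is written in PyTorch purely for illustration; the framework, layer sizes, and toy image dimensions are our assumptions rather than anything described in this post, and real deepfake models are vastly larger and more specialized.

```python
# Minimal sketch of the generative-adversarial idea behind many deepfakes.
# PyTorch, the layer sizes, and the toy 64x64 grayscale images are all
# illustrative assumptions, not details taken from this post.
import torch
import torch.nn as nn

LATENT_DIM = 100      # random noise vector fed to the generator
IMG_PIXELS = 64 * 64  # toy grayscale image, flattened

# Generator: maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: estimates the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)


def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: real images should score 1, generated ones 0.
    noise = torch.randn(batch, LATENT_DIM)
    fakes = generator(noise).detach()
    d_loss = criterion(discriminator(real_images), real_labels) + \
             criterion(discriminator(fakes), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator score fakes as real.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = criterion(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()


if __name__ == "__main__":
    # Stand-in "real" data so the sketch runs end to end.
    dummy_real = torch.rand(32, IMG_PIXELS) * 2 - 1
    train_step(dummy_real)
    print("one adversarial training step completed")
```

The core idea carries over to real systems: a generator keeps producing fakes while a discriminator keeps learning to spot them, and each one's progress forces the other to improve, which is why the output eventually becomes convincing to human eyes and ears.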

This is not new: researchers have been developing deepfake techniques since the early 1990s, but only after the AI boom of 2016 did the field start to see impressive results. In 2017, researchers published a paper along with a YouTube video in which they took audio from one of Barack Obama’s addresses and generated a high-quality fake video of the president speaking with accurate lip sync, but in a different setting and with different head gestures. As impressive as this was, you would still hear the original Barack Obama while watching his image. In 2018, however, the actor Jordan Peele recorded his own speech imitating Barack Obama and used it to generate a fake video that was almost as convincing as any genuine video of Barack Obama addressing the nation. That’s when people began to realize the dangers and challenges of deepfakes.

And this technology is not limited to video. Some Deep Learning models can synthesize a voice that imitates another person very convincingly. In 2019, the CEO of an energy firm based in the United Kingdom was the victim of a scam in which he received a call ordering him to transfer 220,000 euros to a Hungarian bank account. The callers were presumably using deepfake technology to impersonate the Chief Executive of the parent company.

On top of that, it is now also possible to generate still images of people who don’t even exist. These ultra-realistic images of people who never existed are used to build sock puppets: bot accounts on various social networks, sometimes generated by the thousands, whose sole purpose is to carry out disinformation campaigns. We used to wonder whether the person we’re chatting with is really the person in their profile picture; soon we’ll need to wonder whether we’re talking to a real person at all rather than a robot.

All of that, combined with growing societal division and disinformation campaigns run by interest groups, is causing a lot of concern about what needs to be done to avoid chaos. Many intellectuals, politicians, and activists are working to counter the evolution of deepfakes and have warned of impending doom. But we can’t help but wonder whether this is a real concern and, if it is, what the actual challenges of dealing with deepfakes are.

And that is the backdrop of our discussion today. Is the world going to be deceived by deepfakes?

Exponential Chats is a live event conducted by our parent company, Exponential Ventures. In this event, our team members and guests have an in-depth conversation about Exponential Technologies, Entrepreneurship, and some of the world’s most challenging outstanding problems.
