Deep Learning and the Fear of Fraud
Soon we might live in a world where one can never be sure that a video or voice recording is real, no matter how realistic it looks and sounds. Deep learning methods based on artificial neural networks are used to create what are known as deepfakes – visual and audio content that, to the naked eye, looks absolutely real. The potential uses of deepfakes are limited only by the imagination of the people who have access to the technology required to manufacture them.
As technology advances, the tools for creating deepfakes will inevitably become cheaper and easier to use. Training an AI to generate convincing deepfakes still requires a large amount of video and audio data, which means that people who are out of the public limelight are somewhat safer than celebrities. However, a disinformation campaign targeting public figures, especially politicians, harms the public in that it makes it more difficult for citizens to make informed decisions – when voting, for instance.
Deepfakes could make false news stories, designed to influence political life, much more powerful. Whereas someone reading a piece of falsified information might be inclined to question it, seeing a fake video might cause the viewer to jump to conclusions before they have time to think about and question what they are hearing and seeing.
With the latest developments in mobile technology, for many people, numerous sources of information are within arm's reach 24 hours a day. The proliferation of information channels and the ineluctable streams of information directed at us all the time have made information overload a real challenge. Whereas in the past people had to make an effort to acquire information, they are now forced to make decisions about what information to consume and what to filter out.
More and more people are now informed about and involved in politics. Voter turnout at the 2020 US presidential election was the highest since 1900. The wider spread of information thus has positive effects; it is not all about disseminating fake news. An indirect benefit is that, hopefully, people are learning to verify information and spot fakes.
On a positive note, deepfakes could be used in benevolent ways. For instance, this technology could help people who lose the ability to speak due to throat cancer to produce speech in a realistic voice based on past recordings. Deepfakes could also be applied in educational settings, such as museums, to create interactive exhibits or to produce visual representations of historical events. We will probably see deepfakes used more and more in the entertainment industry, for special effects in films.
In a similar fashion, deep learning can be used to generate text that mimics the style and content of an existing human author, or music that imitates the style of an existing band or musician. This could be great for audiences, especially in cases where the author or musician has passed away and fans have a nostalgic desire to see work that reminds them of their idol.
One dilemma that could arise is who would own the rights to content generated from the pre-existing work of a living author. When consumers of creative content buy a book or a music recording, for instance, they come to own a copy of the work. However, they do not own the rights to the content itself. Consequently, they can use the copy only in ways that do not infringe on copyright. For instance, when quoting from a book, we typically have to give credit to the author. By the same token, if copyrighted content is used to train an AI with the intent of producing similar content, this might be viewed as copyright violation, especially if the AI output is then used for profit. In the future, we might see copyright laws amended to handle AI-generated content.
We can draw a parallel to the disruption that peer-to-peer copying of digital content caused in the early 2000s. Prior methods of copying music did not yield good enough quality and could not be used to mass-produce pirated copies. However, with the propagation of digital technology, peer-to-peer copying became a real concern for copyright holders. By analogy, fan fiction written to imitate famous authors will not hurt sales of the original books – unless it can be quickly produced in massive quantities.
In short, deepfakes could drastically change our reality, or at least our perception of it. Governments could try to regulate this technology to prevent wrongdoers from harming citizens, and the public will probably want to see such regulation in place. But when a government agency of one country intends to use deepfakes to influence another country's internal politics, such intervention can only be stopped by an international body. So, in the future we might see treaties and conventions restricting the use of AI and other technologies in the global political struggle.