The explosion of artificial intelligence in recent months has taken many by surprise. While generative AIs like ChatGPT and Midjourney represent a paradigm shift, some anticipate that the technology will bring more problems than benefits. The prospect of malicious actors using deepfakes to mount disinformation campaigns is a latent concern for regulators and for companies like Microsoft.
The latter addressed the issue during an appearance before US legislators. Brad Smith, president of Microsoft, stated that one of his biggest concerns is related to the proliferation of deepfakes. This content, also known as synthetic media, consists of AI-generated videos or images that seek to imitate the appearance and sound of a person.
According to Smith, it is necessary to address the problems posed by these forgeries, particularly in foreign influence operations. The Microsoft president said that countries like Russia, China, or Iran could take advantage of AI, and urged lawmakers to create new laws to protect national security. Smith asked Washington to take measures to ensure that people know how to recognize deepfakes and don't fall into the trap.
“We need to take steps to protect against tampering with legitimate content with the intent to mislead or defraud people through the use of AI”
Brad Smith, President of Microsoft
In a blog post, the Microsoft president wrote that the public should be empowered to recognize AI-generated content. To that end, he suggests implementing labels that indicate when an image or video was created by artificial intelligence rather than by a human. "This labeling obligation should also protect the public from alteration of original content and the creation of deepfakes," he mentioned.
What are deepfakes and why are they dangerous?
Although many only discovered the capabilities of artificial intelligence thanks to ChatGPT, deepfakes have been around the web for years. These videos, images, or audio clips, generated using deep learning and neural networks, aim to fool people. Their creators use thousands of images to train an algorithm that replaces one person's face with another.
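The idea described above is often built around a shared encoder with one decoder per person: both faces are compressed into the same latent space, and the "swap" happens by decoding person A's latent code with person B's decoder. The sketch below is a deliberately toy, linear version of that architecture using random vectors in place of real face images; all dimensions, data, and variable names are illustrative, and real deepfake systems use deep convolutional networks trained on large face datasets.

```python
import numpy as np

# Toy sketch of the shared-encoder / per-person-decoder idea behind
# classic deepfake pipelines. Random vectors stand in for aligned face
# crops; everything here is illustrative, not a real face-swap system.

rng = np.random.default_rng(0)
DIM, LATENT, N = 32, 8, 200

# Stand-ins for face crops of person A and person B (two distributions).
faces_a = rng.normal(0.0, 1.0, (N, DIM)) + 1.0
faces_b = rng.normal(0.0, 1.0, (N, DIM)) - 1.0

# One shared encoder, one decoder per person (linear for simplicity).
enc = rng.normal(0, 0.1, (DIM, LATENT))
dec_a = rng.normal(0, 0.1, (LATENT, DIM))
dec_b = rng.normal(0, 0.1, (LATENT, DIM))

def mse(x, y):
    return float(np.mean((x - y) ** 2))

loss_before = mse(faces_a @ enc @ dec_a, faces_a)

# Train both autoencoders with plain gradient descent on MSE.
lr = 0.003
for step in range(800):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc            # encode with the SHARED encoder
        recon = z @ dec            # decode with this person's decoder
        err = recon - faces        # reconstruction error
        dec -= lr * (z.T @ err) / N          # in-place weight updates
        enc -= lr * (faces.T @ (err @ dec.T)) / N

loss_after = mse(faces_a @ enc @ dec_a, faces_a)

# The "swap": encode a face of person A, decode with person B's decoder.
swapped = (faces_a @ enc) @ dec_b
```

Because the encoder is shared, it learns features common to both identities, while each decoder learns to render one specific face; that asymmetry is what makes the cross-decoding trick produce a face swap in real systems.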
The term was coined in 2017, when a Reddit user named "deepfakes" uploaded porn videos onto which he had superimposed the faces of famous actresses like Gal Gadot and Emma Watson. The result was surprising and forced adult websites to take measures to prevent the proliferation of this content. Later, a BuzzFeed video of Barack Obama calling Donald Trump a complete idiot stole headlines.
As the technology has advanced, deepfakes have become more realistic. Videos of a fake Tom Cruise on TikTok broke the internet, while companies like Lucasfilm used the technique to de-age Star Wars actors. At the same time, some governments and political actors have used AI to launch disinformation campaigns designed to influence and shift public opinion.
The arrival of generative AIs has only made the situation worse. It is now possible to generate realistic images of Pope Francis, Donald Trump, or a supposed Japanese health minister that can fool much of the world. Midjourney removed the free trial from its latest version to stop people from abusing it to create false content.
AI regulation could prevent deepfakes, albeit with nuances
During his appearance before the Washington committee, the Microsoft president also asked legislators to tighten export rules. Smith wants to prevent AI models from falling into the hands of third parties, such as China, and is therefore calling for current laws to evolve. It is worth mentioning that the United States has already tightened its policy toward China and prohibits the export of chips and the equipment needed to produce artificial intelligence semiconductors.
A few days ago, Sam Altman appeared before the US Senate to speak on AI regulation. The CEO of OpenAI shared his view of the dangers posed by deepfakes in campaigns to influence public opinion, particularly around elections. Altman suggested creating an international regulatory body responsible for granting, and revoking, licenses for the most powerful AI systems.
Though Brad Smith and Sam Altman are appalled by what foreign powers might do, the real enemy may be at home. An investigation by The Intercept revealed that the United States Special Operations Command (SOCOM) plans to use deepfakes in propaganda campaigns. In a document dating from 2020, SOCOM calls for AI technology for influence and disinformation operations.
Agents would hack devices to learn about their enemies and subsequently use deepfakes to deceive them and change their way of thinking. With this, the United States would add artificial intelligence to its arsenal of tactics for destabilizing governments not aligned with its ideology or its economic policy. The country has extensive experience in this field, with operations in various countries across Africa, Latin America, and Asia.