Entering the AI Era: Implications for Disinformation and Foreign Policy
In March 2022, barely a month after Russia's invasion of Ukraine, a video appeared to show Ukrainian President Volodymyr Zelensky urging Ukrainians to lay down their arms and surrender, stunning international audiences. Was this the end of the Russo-Ukrainian War?
But the disproportionately large head and unnaturally deep voice gave it away. The video was actually a deep fake: a synthetic likeness of the Ukrainian leader created using artificial intelligence (AI).
With applications ranging from the benign, such as generating Netflix recommendations, to the lethal, such as autonomous drone strikes, AI has major implications for the future of global politics. As the deep fake of President Zelensky demonstrates, AI empowers virtually any user to create and spread disinformation with destructive political consequences. Given the technology's ever-evolving nature, the risks posed by AI call for a new approach to foreign policy.
The Challenge for Foreign Policy
Current AI policy fails to account for the technology's present capabilities, let alone its potential future development. Policymakers still rely on strategies introduced more than seventy years ago, when they confronted an earlier technology that jeopardized global security: nuclear weapons.
Following the end of World War II and the detonation of atomic bombs over Hiroshima and Nagasaki, the Cold War ushered in an era of geopolitical tension between the United States and the Soviet Union as each superpower raced to develop the most lethal nuclear arsenal.
Nuclear weapons introduced new challenges to diplomacy. The scale of possible destruction raised the stakes of maintaining global peace and security. Because each side could obliterate the other at any moment, mutual fear produced a strategy of nuclear deterrence. Eventually, leaders traded fear for international accords. The Strategic Arms Limitation Treaty (1972), the Anti-Ballistic Missile Treaty (1972), and the Strategic Arms Reduction Treaty (1991) placed ceilings on nuclear weapons development and limited states' destructive capabilities.
Similarly, AI fuels competition between world powers while carrying the capacity for global destruction. As China and the United States race for AI supremacy, both countries accelerate the development of powerful technology while diverting energy away from cooperation over its use.
With AI emerging as the next major technological threat to society, policymakers should remember the lessons of international nuclear regulation in preventing global catastrophe. Yet despite the parallels between nuclear weapons and AI, nuclear policy fails to address the distinguishing feature of AI: its extraordinarily rapid development.
New Day, New AI
Alan Turing theorized in the 1950s about recreating human intelligence in a machine, but the term "artificial intelligence" itself was coined by computer scientist John McCarthy in 1955. In the following decades, scientists developed computers and algorithms with ever more advanced data-processing capabilities, giving rise to a new area of computer science called "machine learning." Building on Turing's idea of modeling human thought in a computer, machine learning uses algorithms to teach computers to perform tasks for which they were not explicitly programmed.
More advanced than traditional machine learning, "deep learning," whose foundations were laid in the 1980s, brought AI closer to its present form. Deep learning relies on layered "neural networks," complex structures of algorithms loosely modeled on the human brain, which enable computers to learn through experience. Deep learning thus equips machines to acquire capabilities beyond what humans can explicitly program.
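The core idea of learning from examples rather than explicit programming can be seen even in a toy sketch. The snippet below (an illustrative simplification, not any production system) trains a single artificial neuron, the basic building block of a neural network, to reproduce the logical OR function purely by adjusting its weights in response to its own errors:

```python
import math
import random

random.seed(0)

# Training examples: inputs and the desired OR output.
# Nothing in the code states the OR rule explicitly.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# Randomly initialized weights and bias for one neuron.
w1, w2, b = random.random(), random.random(), random.random()

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

for _ in range(5000):  # repeated exposure to the same examples
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target          # how wrong the neuron currently is
        w1 -= 0.5 * err * x1        # nudge each weight to reduce the error
        w2 -= 0.5 * err * x2        # (gradient of the cross-entropy loss)
        b -= 0.5 * err

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # the neuron has learned OR: [0, 1, 1, 1]
```

Deep learning stacks many layers of such neurons, which is what lets modern models learn behaviors no programmer ever specified.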
Trained through experience, AI algorithms change as they run. According to Dr. Ben Zhao, Neubauer Professor of Computer Science at UChicago, who studies the capabilities and public implications of AI, the ever-developing nature of AI poses unique challenges that are difficult for policymakers to predict.
“The problem is that every single time you identify a weakness, all it takes is one iteration of the model to get better,” Dr. Zhao explains. “Whatever weakness you identify, you train into the model and tell it explicitly not to do that.”
As a consequence of this learning, AI systems continue getting smarter and more capable, faster than any human can keep up with.
Dr. Zhao fears that AI might be misused, resulting in serious political implications. “Misinformation and disinformation will be terrible,” he says. “Because it will be easy to synthetically generate content, people will either believe disinformation wholesale or become completely numb and believe nothing.”
AI and the Acceleration of Disinformation
AI possesses the power to revolutionize human conceptions of information by blurring the line between true and false to the point of indistinguishability.
In addition to deep fakes such as the video of President Zelensky, recommendation systems are common propagators of disinformation. These systems learn user behavior in order to cluster and serve similar content, which can steer users toward ever more extreme material and conspiracies.
“If you’re a big consumer of certain conspiracy theories, then you’ll just find more in your search results,” Dr. Zhao explains. “YouTube and other platforms can be dangerous because they reinforce anything that they observe.”
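This reinforcement dynamic is easy to see in miniature. The sketch below is a deliberately simplified, hypothetical recommender (not any real platform's algorithm): it simply suggests whatever category a user has engaged with most, so a single early click compounds into a one-sided feed:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical content categories; real platforms have millions of items.
catalog = ["news", "sports", "conspiracy"]

def recommend(history: Counter) -> str:
    """Recommend the most-viewed category; new users get a random pick."""
    if not history:
        return random.choice(catalog)
    # Whatever the system has observed, it reinforces.
    return history.most_common(1)[0][0]

history = Counter()
history["conspiracy"] += 1  # one initial click on fringe content

for _ in range(10):
    suggestion = recommend(history)
    history[suggestion] += 1  # the user watches what is recommended

print(history)  # the single early click has snowballed into an all-conspiracy feed
```

Real recommenders are vastly more sophisticated, but the feedback loop Dr. Zhao describes, where observed behavior is fed back as future suggestions, has the same basic shape.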
Many popular platforms, including Facebook and Twitter, exhibit this behavior, often with destructive consequences. Consider the rise of QAnon and the January 6th insurrection at the Capitol: by spreading false conspiracies about Trump and the presidential election, QAnon encouraged its adherents to storm the Capitol, resulting in millions of dollars of property damage, five deaths, and threats to national peace and democracy.
AI also shapes politics by distorting the feedback systems leaders use to make government decisions. To gauge local problems and citizen attitudes, leaders employ AI to analyze data and statistics; however, these systems have flaws. Because they learn only from the data they are given, such systems tend to reflect the biases of their creators and their inputs, distorting analysis to align with leaders' prejudices instead of representing genuine public feedback.
Distorted data leads to misinformed decisions, as in President Xi Jinping's anti-poverty campaign in China. His reliance on AI reports to track rural poverty proved vulnerable to manipulated data: officials gamed the system by stashing rural citizens in urban apartments, and the AI reports duly indicated a reduction in poverty, leading President Xi to believe, mistakenly, that his policy was succeeding. As this case demonstrates, AI's dependence on its inputs raises the stakes of collecting accurate data, which is especially difficult in autocratic regimes where the only available information aligns with the leader's views.
AI's capacity to generate disinformation at scale further amplifies these consequences. Prompted with a few keywords, AI can automatically produce masses of synthetic text or images. Russia, for example, launched a disinformation campaign using AI language models prompted to create false commentaries, aiming to confuse the public and advance the regime's political agenda. With AI-generated propaganda polluting the information space, the Russian public could not discern true from false, and therefore could not comprehend regime action, much less protest it. Because AI can imitate journalistic style, it creates an illusion of credibility that makes audiences more willing to accept disinformation, compounding its effects.
With rapidly advancing capabilities for analyzing, directing, and generating data, AI can create and propagate disinformation that convincingly imitates reality, spreading conspiracies, confusion, and chaos. Given these consequences, global policymakers urgently need to address the dangers of AI and curb the consequences of disinformation.
The Need for a Diplomatic Change
The constantly improving capabilities and unpredictable ramifications of AI necessitate international cooperation to regulate its use.
Global agreements such as the Organisation for Economic Co-operation and Development's (OECD) AI Principles lay out values and recommendations for deploying AI while respecting democratic values.
Although many governments around the world have already committed to this international agreement, there is still much to be done to combat issues of disinformation enabled by AI technology.
Taking lessons from nuclear deterrence strategy, global tech powers such as the United States and China should avoid another Cold War-style arms race, which would accelerate malicious AI development and misuse. Instead, policymakers should devote their energy to crafting informed AI policies and reaching international accords to govern the use of this rapidly developing technology.
Of course, the first step toward generating informed policy is to become more knowledgeable about AI. Dr. Zhao suggests that policymakers need to close the gap between “tech people” and policymakers. He explains that “the speed of AI development makes the need for policymakers with computer backgrounds all the more important.” If they cannot develop the necessary expertise themselves, then “policymakers need to hire technology people.”
Before any steps can be taken to combat AI disinformation, policymakers must first understand how the technology works and realize everything that is at stake. By applying an informed policy that targets AI misuse and disinformation, leaders can counter the political consequences of circulating conspiracies, misinformed decisions, and widespread public confusion. As AI evolves, so must foreign policy, to address new issues and protect international security.
The image featured in this article is licensed for reuse under the Creative Commons 0 1.0 Universal (CC0 1.0) Public Domain Dedication license. No changes were made to the original image, which was taken by Gerd Altmann and can be found here.