Carnegie Endowment for International Peace
Published June 17, 2024
How AI Might Affect Decisionmaking in a National Security Crisis
Policy Analysis · Artificial Intelligence
Summary
- Christopher S. Chivvis and Jennifer Kavanagh at Carnegie Endowment for International Peace discuss the potential for AI to both enhance and complicate decision-making within the U.S. National Security Council, highlighting challenges like information overload and misperceptions.
- The authors argue that advanced AI could counter groupthink by offering diverse perspectives, but that overconfidence in AI systems could instead intensify it; they emphasize the need for training and AI governance to ensure effective use and stability in crises.
Overview:
- AI systems can both accelerate and complicate decision-making in national security scenarios.
- Overconfidence in AI recommendations could lead to groupthink and potentially dangerous misperceptions.
Key Quotes:
- "AI-enabled systems can help accelerate the speed of commanders’ decisions and improve the quality and accuracy of those decisions."
- "In reality, AI systems are only as good as the data they are trained on, and even the best AI have biases, make errors, and malfunction in unexpected ways."
What They Discuss:
- The proliferation of AI in national security could slow decision-making because AI systems produce additional data that need to be evaluated.
- AI could heighten uncertainty in crisis situations by producing deepfake videos and other misleading information.
- AI might challenge existing groupthink in decision-making settings by offering out-of-the-box ideas but could also entrench it if decision-makers over-rely on AI recommendations.
- The development of AI tools by well-funded agencies could disturb the balance of influence among key governmental bodies, such as the Department of Defense and the Intelligence Community.
- Misjudging the role AI systems play in an adversary's actions could escalate crises through miscalculation.
What They Recommend:
- Implement thorough training for policymakers on AI systems to understand their limits and capabilities.
- Establish an AI governance regime similar to arms control to manage and reduce risks of AI deployment in military contexts.
- Foster international cooperation, especially between the U.S. and China, on AI safety and governance measures.
Key Takeaways:
- AI has the dual potential to both streamline and complicate crisis decision-making processes.
- Training and prior experience with AI tools are crucial for their effective and safe use.
- Establishing clear norms and agreements on AI use is important for reducing the risk of misperceptions and unintended escalations.
- Policymakers must be wary of AI’s potential to reinforce groupthink and should maintain a balanced approach that incorporates human judgment.
This is a brief overview of the article by Christopher S. Chivvis and Jennifer Kavanagh at Carnegie Endowment for International Peace. For complete insights, we recommend reading the full article.
Original Read Time
9 min