Pentagon Wants AI to Predict Events Before They Occur

Haven’t we seen this film before?

[Illustration: a robot hand holding a crystal Earth in front of a laptop. Credit: iStockphoto]

What if, by leveraging today's artificial intelligence to predict events several days in advance, countries like the United States could simply avoid warfare in the first place?

It sounds like the ultimate form of deterrence, a strategy that would save everyone all sorts of trouble. It is also precisely the kind of visionary thinking driving U.S. military commanders and senior defense policymakers toward the rapid adoption of artificial intelligence (AI)-enabled situational awareness platforms.

In July 2021, the North American Aerospace Defense Command (NORAD) and U.S. Northern Command (NORTHCOM) conducted the third in a series of experiments called the Global Information Dominance Experiments (GIDE), in collaboration with leaders from all 11 combatant commands. The first and second experiments took place in December 2020 and March 2021, respectively. The experiments were designed to occur in phases, each demonstrating the current capabilities of three interlinked AI-enabled tools: Cosmos, Lattice, and Gaia.

Gaia provides real-time situational awareness for any geographic location, drawing on many different classified and unclassified data sources: massive volumes of satellite imagery, communications data, intelligence reports, and a variety of sensor data. Lattice offers real-time threat tracking and response options. Cosmos allows for strategic, cloud-based collaboration across many different commands. Together, these decision tools are supposed to anticipate adversaries' actions, allowing U.S. military leaders to preempt them before kinetic conflict arises and to deny adversaries any perceived benefits from taking the predicted actions.
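The implementations of these tools are not public, but the general pattern of fusing heterogeneous indicators into a single predictive estimate can be sketched with ordinary machine-learning tooling. The minimal example below is purely illustrative: the indicator features, the invented training data, and the choice of a logistic-regression classifier are all assumptions for demonstration, not a description of Gaia, Lattice, or Cosmos.

```python
# Purely illustrative sketch of multi-source indicator fusion. All feature
# definitions and numbers are invented; nothing here reflects the actual
# design of the GIDE tools.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical daily feature vectors:
# [vehicle count from satellite imagery, normalized comms traffic,
#  flagged intelligence reports, aggregate sensor anomaly score]
X_train = np.array([
    [120, 0.40, 2, 0.10],
    [135, 0.50, 3, 0.20],
    [110, 0.30, 1, 0.10],
    [290, 0.80, 7, 0.70],
    [305, 0.95, 8, 0.90],
    [310, 0.90, 9, 0.80],
])
# 1 = an adversary action followed within some window, 0 = it did not
y_train = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X_train, y_train)

# Today's fused observations: the model emits a probability that an action
# follows, which human analysts still have to interpret and act on.
today = np.array([[280, 0.85, 6, 0.60]])
prob = model.predict_proba(today)[0, 1]
print(f"Estimated probability of adversary action: {prob:.2f}")
```

Even in this toy form, the output is only a probability; which indicators to include, how past events are labeled, and what threshold should trigger a response are all human judgments baked into the pipeline.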

Such tools are particularly attractive to U.S. defense leaders as they prepare for compressed decision times in the future due to greater use of AI on the battlefield.

They also invoke several popular buzzwords floating around the Beltway, including information dominance, decision superiority, integrated deterrence, and joint all-domain command and control (JADC2). In a speech at a one-day conference of the National Security Commission on Artificial Intelligence (NSCAI), U.S. Defense Secretary Lloyd Austin touted the importance of AI for supporting integrated deterrence, expressing his intent to use "the right mix of technology, operational concepts, and capabilities—all woven together in a networked way that is so credible, flexible, and formidable that it will give any adversary pause."

These AI-enabled platforms are expected to go beyond merely providing enhanced situational awareness and better early warning, offering U.S. military leaders what is considered the holy grail of operational planning: strategic warning of adversarial actions in the gray zone (i.e., the competition phase), before any irreversible moves have been made. Such an advance would allow decision-makers to formulate proactive options (rather than the reactive ones of the past) and enable much faster decisions.

It's tempting to ask: What could possibly go wrong? Everyone knows the canon of sci-fi novels and films that explore the dangerous pitfalls of AI-enabled systems, including Minority Report, Colossus: The Forbin Project, and WarGames. The idea is also oddly reminiscent of the Soviet intelligence program known as RYaN, which was designed to anticipate a nuclear attack based on data indicators and computer assessments.

During the 1980s, the KGB wanted to predict the start of a nuclear war as much as six months to a full year in advance from a wide variety of indicators: the physical locations of U.S. nuclear warheads, monitored activities at American embassies and NATO, unplanned movement of senior officials, FEMA preparations, military exercises and alerts, scheduled weapons maintenance, leave policies for soldiers, visa approvals and travel information, and U.S. foreign intelligence activities. They even considered the removal of documents related to the American Revolution from public display as a potential indicator of war. Massive amounts of data were fed into a computer model to "calculate and monitor the correlation of forces, including military, economy, and psychological factors, to assign numbers and relative weights." The findings from RYaN contributed to Soviet paranoia about an impending U.S. nuclear attack in 1983 and nearly led the Soviet leadership to start a nuclear war.
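That description amounts to a weighted checklist: score each indicator, weight it, and sum the results into a composite measure of how close war is believed to be. A rough sketch of that arithmetic, with invented indicator names, weights, and levels, might look like this:

```python
# A loose illustration of the weighted-indicator arithmetic attributed to
# RYaN. Indicator names, weights, and current levels are all invented; the
# historical model's real inputs and weighting scheme are not public.
indicators = {
    # name: (weight, current level on a 0-1 scale)
    "warhead_movement":       (0.30, 0.20),
    "embassy_activity":       (0.15, 0.60),
    "senior_official_travel": (0.10, 0.40),
    "civil_defense_prep":     (0.20, 0.10),
    "military_exercises":     (0.25, 0.70),
}

score = sum(weight * level for weight, level in indicators.values())
print(f"Composite warning score: {score:.2f}")  # e.g., alert if above 0.60
```

As the 1983 war scare showed, the choice of indicators and their weights carries all of the modelers' assumptions; a confident-looking number can simply reflect the fears of the people who built the model.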

Though RYaN was an idea far ahead of its time, today's machine learning technologies are capable of detecting subtle patterns in seemingly random data and could start making accurate predictions about adversaries in the near term. Amidst the wellspring of enthusiasm for AI-enabled decision tools, U.S. defense leaders are hoping to deflect any concerns by insisting that their adoption will be responsible, that humans will remain in the loop, and that any systems producing unintended consequences will be taken offline.

However, national security experts such as Paul Scharre, Michael Horowitz, and many others point out the critical technical hurdles that will need to be overcome before the benefits of using AI-enabled tools outweigh the potential risks. Though much useful data already exists for plugging into machine learning algorithms, assembling a truly unbiased dataset designed to predict specific outcomes remains a major challenge, especially for life and death situations and in areas of sparse data availability such as a nuclear conflict.

The complexity of the real world poses another major obstacle. To function properly, machine learning tools require accurate models of how the world works, and their accuracy depends largely on human understanding of the world and how it evolves. Since such complexity often defies human understanding, AI-enabled systems are likely to behave in unexpected ways. And even if a machine learning tool overcomes these hurdles and functions properly, the problem of explainability may prevent policymakers from trusting it if they cannot understand how it arrived at a given output.

Leveraging AI-enabled tools to make better decisions is one thing, but using them to predict adversarial actions in order to preempt them is an entirely different ballgame. Beyond raising philosophical questions about free will and inevitability, proactive actions taken in response to predicted adversarial behavior might themselves be perceived by the other side as aggressive, catalyzing the very war we sought to avoid in the first place.

The Conversation (4)
William Adams (19 Oct, 2021)

ROTFLMAO

Nobody can predict the future.

In some cases, given sufficient data and a suitable situation, one might usefully extrapolate from the results of previous cause/effect situations.

Or DoD might ASSume that a BLUFF by some other country means they are ready to attack and then actually start the war by some preemptive action. Or their AI model was trained on dirty data which gave the wrong answer from the current inputs thus causing a similar scenario as noted previously.

James Brady (19 Oct, 2021)

Figuring out what to do with such a system looks to be harder than building one. Of course building one that is certifiably trustworthy is probably impossible. I suggest that the US government stick to weather forecasting as incorrect weather forecasts will kill substantially fewer than incorrect war forecasts.

PS: In the early 1960s I worked on both war and weather forecasting. Failed at the first, 99%+/- accurate with the weather (35,000 feet and up).

Tim Green (15 Oct, 2021)

The title here is uncontroversial. Our engineering background and life experience agree that the best time to predict events is before they occur.