Behind last year’s buzz following the release of the text generator GPT-3, there was another machine learning headline that gave pause: AI-controlled fighter jets had defeated a human-piloted fighter jet 5-0 in a DARPA simulation. Today’s fighter pilots may be going the way of the medieval knight in shining armor: obsolete and unsustainable because of catastrophic vulnerabilities on the battlefield.
Whereas blog stories about the amazing capabilities of the latest version of GPT-3 triggered a great deal of discussion about deep fakes and similar deceptions, none of that applies to AI fighter pilots. The human authorship of a blog post is hard to determine from the text alone, but the outcome of a dogfight can be determined with fair accuracy, both in simulation and in actual combat.
The AI fighter pilot story illustrates the increasingly crucial role that software and sophisticated coding techniques play in today’s military technologies. Simulations like the DARPA dogfight trials are important training and research tools. In actual deployments of AI-controlled weapon systems, however, coding errors can have catastrophic consequences.
The crucial importance of software performance already came to light during the Gulf War, almost thirty years ago to the day, on February 25, 1991. Twenty-eight U.S. soldiers were killed and nearly one hundred others were wounded when a Patriot missile defense system stationed near Dhahran, Saudi Arabia failed to properly track a Scud missile launched from Iraq. The missile passed through the defenses and hit nearby Army barracks, resulting in the deadliest incident of the war for American soldiers.
The official US General Accounting Office report on the incident found that a software problem
“led to an inaccurate tracking calculation that became worse the longer the system operated. At the time of the incident, the battery had been operating continuously for over 100 hours. By then, the inaccuracy was serious enough to cause the system to look in the wrong place for the incoming Scud” — US GAO Report.
The problem was caused by the way the system stored and tracked its uptime (the time elapsed since the system was started). The clock counted time in tenths of a second, but the value 1/10 has no exact binary representation, and the system’s 24-bit registers chopped it short. The resulting rounding error added 0.003433 seconds to the clock during every hour of uptime, enough to make the tracking system start failing after about 20 hours.
According to the US GAO report, by the time the Scud missile appeared on the screens of the Patriot defense system, the system had been up for more than 100 hours, increasing the time error to 0.3433 seconds, with a corresponding range discrepancy of 687 meters (about 0.42 miles). That is far too great an error when trying to intercept an incoming missile traveling at Mach 5 (approx. 1,656 meters per second)!
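The arithmetic behind these figures can be reproduced in a few lines. The sketch below follows the widely cited analysis of the bug: the clock counted ticks of one tenth of a second and converted them to seconds by multiplying by a chopped binary approximation of 1/10. The 23-bit chop used here is a simplifying assumption about the effective precision of the 24-bit register, chosen because it reproduces the per-hour drift the GAO report describes:

```python
import math

# The binary expansion of 1/10 is non-terminating (0.0001100110011...),
# so a fixed-point register must chop it. Assumed effective precision:
BITS = 23  # simplification of the Patriot's 24-bit register layout
chopped_tenth = math.floor(0.1 * 2**BITS) / 2**BITS

# Each clock tick (one tenth of a second) accumulates the chopping error.
error_per_tick = 0.1 - chopped_tenth   # roughly 9.5e-8 seconds per tick
ticks_per_hour = 10 * 60 * 60          # ten ticks per second

drift_per_hour = error_per_tick * ticks_per_hour   # ~0.003433 s per hour
drift_100_hours = drift_per_hour * 100             # ~0.3433 s

print(f"error per tick: {error_per_tick:.3e} s")
print(f"drift per hour: {drift_per_hour:.6f} s")
print(f"drift at 100 h: {drift_100_hours:.4f} s")
```

A clock error of a third of a second is invisible in everyday computing, but at a Scud’s closing speed it shifts the window in which the radar expects to see the target by hundreds of meters, which is why the system ended up looking in the wrong place.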
Today, thirty years later, early versions of AI-controlled fighter planes, combined with the deployment of synchronized drones, make it clear that the future of armed conflicts will be determined by the most effective combination of hardware and AI-powered control systems. Learning machines fighting against learning machines, code against code, hardware against hardware. The wars of the future may very well be fought with and decided by AI-controlled weapons: no human warriors required, only human victims. What a dark new world . . .
AI vs. Human Combat Fighter Pilots
Patriot Missile Software Failure