A trial conducted at Edwards Air Force Base involved aerial combat between two F-16 jets, one controlled by artificial intelligence (AI) and the other by a human pilot. Adding a twist, Secretary of the Air Force Frank Kendall was in the cockpit of the robotic aircraft during the flight that took place late last month.
The Pentagon aims to introduce the first AI-controlled aircraft into service by 2028 and eventually field a fleet of around 1,000 such planes. The choice of the F-16 is no accident: the fighter jet is abundant in the U.S. military inventory and is slated to be phased out as the F-35 enters service in substantial numbers.
According to an Associated Press report, artificial intelligence is considered the most influential technology in military aircraft since the introduction of stealth in the 1990s. However, AI piloting is still not considered a fully mature technology, and it will take many years to understand how to leverage it in military contexts, especially since the regulations governing the use of artificial intelligence in combat are still not entirely clear.
"We need this technology; it's a security risk not to have it," Kendall told reporters during the trial. In a touch of irony, the AI-based F-16 is nicknamed Vista (arguably the worst version of Windows, according to many; only kidding). But judging by Kendall's smile as he climbed out of the cockpit after a full hour of experimental flight against a human pilot in a rival F-16, it's evident he already believes in AI's capability to conduct combat and even "decide whether or not to shoot its weapon."
The idea of AI deciding whether or not to fire on enemies is currently a red line. Weapons experts and AI specialists, as well as humanitarian groups, point out the many dangers inherent in such a decision. For one, it's hard to predict how an artificial intelligence will decide whether to fire. The process by which AI systems reach conclusions is still not fully understood, and even operators often have limited insight into a system's reasoning once it has been given a task. This opacity is also one of the obstacles facing the global deployment of autonomous vehicles, for example.
One danger is that an AI might launch attacks without consulting its operators, or reach incorrect conclusions about targets and attack without warning, risking unintended casualties. In short, it's a complex ethical and humanitarian problem, even for humans.
For Kendall, however, it's a system that will always remain under human supervision. The shift to AI in advanced weapon systems like fighter jets is, in many cases, the product of an arms race. China, for example, is also investing heavily in artificial intelligence technologies, including unmanned aircraft. The U.S. Air Force, advanced and vast as it is, still relies on human pilots, who are a limited resource, certainly compared to the manpower of the Chinese military. The Chinese Air Force is expected to surpass its American counterpart in number of fighter jets in the coming years, and it too is developing AI-based weapon systems.
One possible scenario in a war between the U.S. and China is a massive first-wave attack by unmanned aircraft. The cost of the F-35 or F-22 does not allow for producing enough aircraft to block a mass attack by hundreds of planes at once. AI-based, unmanned aircraft like the modified F-16 are the way forward, according to the government.
The U.S. has a tremendous advantage over China in one respect: nearly a century of air combat training data, information that helps train artificial intelligence to cope with the real world. The Americans also have access to data from allied nations, creating a vast database that injects the AI with knowledge and experience China and its allies simply cannot match: Iran, Russia, and North Korea have not conducted nearly as much air combat with modern stealth aircraft as the U.S. has.
Vista's first air combat took place in September 2023. Dozens of such engagements have occurred since, allowing the model to be trained to the point where it has even defeated human pilots in several dogfights. It must be strange for the pilots to be training the very systems that could one day render them irrelevant. But they also understand there isn't much choice, as they certainly wouldn't want to find themselves in the skies facing an enemy air force fielding AI-controlled aircraft of its own.