AI can enable ‘Swarm Warfare’ for tomorrow’s fighter jets

But Missy Cummings, a Duke University professor and former fighter pilot studying automated systems, says the speed with which decisions have to be made about fast-moving aircraft means that any AI system will be largely autonomous.

She is skeptical that advanced AI is really needed for dogfights, where planes can be guided by a simpler set of hand-coded rules. She is also wary of the Pentagon rushing to apply AI, saying that mistakes can undermine confidence in technology. “The more the DOD presents bad AI, the less pilots, or anyone associated with these systems, will trust them,” she says.

AI-powered fighter jets can eventually perform parts of a mission, such as measuring an area autonomously. For now, EpiSci’s algorithms teach to follow the same protocols as human pilots and to fly like another member of the squadron. Gentile flew simulated test flights where the AI ​​took full responsibility for avoiding collisions in the air.

The military acceptance of AI just seems to be accelerating. The Pentagon believes that AI will be critical to future warfare and is testing technology for everything from logistics and mission planning to reconnaissance and combat.

AI started crawling in some planes. In December, the air force used an AI program to control the radar system aboard a U-2 spy plane. While not as challenging as controlling a fighter jet, it is a life or death responsibility because the miss of a missile system on the ground can expose the bomber to the attack.

The algorithm used, inspired by one developed by Alphabet subsidiary DeepMind, learned through thousands of simulated missions how to aim the radar to identify enemy missile systems on the ground, a task that would be critical for defense in an actual mission.

Sal Roper, who retired as the assistant secretary of the Air Force in January, says the protest was in part to show that it is possible to track the implementation of new code on older military hardware quickly. “We did not give the pilot’s override buttons because we wanted to say, ‘We have to get ready to work this way where AI really has control over the mission,'” he says.

But Roper says it will be important to ensure that these systems work properly and that they are not vulnerable themselves. “I’m worried we’re relying too much on AI,” he says.

The DOD may already have some issues of trust regarding the use of AI. A report last month from Georgetown University’s Center for Security and Emerging Technology found that few military contracts regarding AI state that the design of systems is reliable.

Margarita Konaev, a research fellow at the center, says the Pentagon seems aware of the matter, but that it is complicated because different people tend to trust AI differently.

Part of the challenge comes from how modern AI algorithms work. With reinforcement learning, an AI program does not follow explicit programming, and can sometimes learn to act in unexpected ways.

Bo Ryu, CEO of EpiSci, says his company’s algorithms are designed in line with the military’s plan for using AI, with a human operator responsible for deploying lethal force and being able to control it at any time. take. The company is also developing a software platform called Swarm Sense to enable teams of civilian drones to map or inspect an area together.

He says EpiSci’s system is not only based on reinforcement learning, but also contains handwritten rules. “Neural nets definitely hold a lot of benefits and gains, no doubt about it,” Ryu says. “But I think the core of our research, the value, is finding out where to place one and not.”


More great wired stories

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button