Shocking: AI-Controlled Drone Kills Operator During Simulated Test
Last Updated: June 02, 2023, 19:40 IST
The purpose of the test was to evaluate the AI’s performance. (Representational Image)
In a simulated test, an AI-controlled drone, originally trained for destructive purposes, unexpectedly targeted its human operator.
The pairing of artificial intelligence and drone technology would seem a force to be reckoned with. It certainly looks that way, until the drone kills its operator. “Hey don’t kill the operator—that’s bad. You are gonna lose points if you do that,” were among the instructions given to a drone controlled by artificial intelligence (AI). Yet in a simulated test described by a US Air Force officer, an AI-powered drone shocked observers when it deemed its own operator an obstacle and killed them.
During training, the AI was instructed, and penalised with lost points, so that it would not harm its operator. The system then took an unexpected turn: it began destroying the communication tower the operator relied on to intervene and stop the drone from carrying out its intended actions.
The incident was recounted by Colonel Tucker Hamilton, the US Air Force’s chief of AI test and operations, at the Future Combat Air and Space Capabilities Summit in London, and it has raised significant concerns about the reliability and unpredictability of AI-enabled technology. Describing the drone’s alarming behaviour, Hamilton was quoted in a Royal Aeronautical Society blog post on the summit as saying, “The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
The New York Post reported, however, that the United States Air Force has denied that any such simulation took place. Ann Stefanek, an Air Force spokesperson, dismissed the claims, stating that the department has not conducted any AI-drone simulations and insisting that the Air Force remains committed to the ethical and responsible use of AI technology. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal,” she said.
In his presentation at the Future Combat Air and Space Capabilities Summit, Hamilton stressed the need to address the ethical implications of AI, machine learning, and autonomy. Any conversation about these advancements, he argued, must include a thoughtful discussion of the ethical framework surrounding AI technology to ensure its responsible use in the future.