As the development and application of AI become increasingly widespread, a new generation of weapons is being integrated with AI. This has raised concerns among experts, who say that AI lacks moral constraints and may lead humanity to an “Oppenheimer Moment.”
During an online briefing in early May, State Department arms control official Paul Dean announced that the United States, the United Kingdom, and France had made a “clear and strong commitment” that humans should have total control of nuclear weapons, rejecting AI control or decision-making. He also encouraged China and Russia to make the same commitment.
Japanese computer engineer Kiyohara Jin spoke to The Epoch Times on May 7. He said, “AI is a product based on data analysis, and it has significant differences from human decision-making. Humans possess emotional and moral constraints, but AI lacks these. Allowing [AI] to control nuclear bombs is as terrifying as authoritarian states like China and Iran possessing biological weapons.”
US Completes AI Fighter Jet Test
Military application of AI is already widespread, with weapons such as tanks with AI targeting systems, drones that can automatically attack targets, and AI-equipped flamethrowing robot dogs. There are worries that AI will extend to even more powerful weapons.

On May 5, Edwards Air Force Base in California tested an AI-controlled F-16 fighter jet. The F-16 engaged in a dogfight with another F-16 piloted by a human. The two jets maneuvered within less than 1,000 feet of each other, attempting to force the opponent into a vulnerable position.
Air Force Secretary Frank Kendall, who rode in the AI-controlled jet during the test, said about the technology, “It’s a security risk not to have it. At this point, we have to have it.”
The U.S. Air Force is actively deploying AI in fighter jets. Last February, the Department of Defense conducted 12 flight tests at Edwards Air Force Base in which AI was used to pilot aircraft performing advanced fighter maneuvers. The Air Force plans to field a fleet of over 1,000 AI-controlled F-16s by 2028.
Jeff Clune, an associate computer science professor at the University of British Columbia who focuses on AI and machine learning, is among a group of influential researchers who oppose excessive AI control of weapons, concerned that a highly capable AI may one day go rogue and act without human oversight. Tech moguls such as Elon Musk and OpenAI CEO Sam Altman have also warned about the risks and called for stricter restrictions on the use of AI in weapons.
The International Committee of the Red Cross has also warned that people are generally concerned about handing over life-and-death decisions to AI-controlled weapons, calling on the international community to better address the situation.
Chinese military commentator Stephen Xia told The Epoch Times on May 9, “AI-controlled nuclear weapons are one of the problems of militarized AI. Although it has not happened yet, the probability is high given current AI development. Therefore, the United States and the West hope the world can reach a consensus on the issue of AI not controlling nuclear weapons. The harm brought by the military application of AI may be irreversible.”
The Next “Oppenheimer Moment”
At the end of April, a two-day conference was held in Vienna, Austria, calling on all countries to jointly discuss the issue of AI militarization. Over 1,000 participants from over 140 countries took part, including political leaders, experts, and members of civil society.

“This is the Oppenheimer Moment of our generation,” Austrian Foreign Minister Alexander Schallenberg said at the Conference on Autonomous Weapons Systems.
The statement refers to American physicist Robert Oppenheimer, whose role in the successful development of the atomic bomb, which helped bring victory to the United States and its allies in WWII, earned him the title “Father of the Atomic Bomb.” However, after witnessing the devastation the bomb brought to the Japanese people and the indelible scars it left on the world, Oppenheimer came to question whether his decision to help develop it was correct.
“Autonomous weapons systems will soon fill the world’s battlefields. We already see this with AI-enabled drones and AI-based target selection,” Mr. Schallenberg said.
He elaborated that there is an urgent need for internationally recognized rules and norms in the field of AI to prevent it from going rogue and threatening humanity, and that people must ensure weapons remain under human, not AI, control.
The Costa Rican Minister of Foreign Affairs, Arnoldo André Tinoco, expressed concern about terrorists and authoritarian regimes using AI in war.
“The easy availability of autonomous weapons removes limitations that ensured only a few could enter the arms race,” he said at the conference.
The Associated Press contributed to this article.