IN-DEPTH: ‘It Is Skynet’: Pentagon Envisions Robot Armies in a Decade

(Illustration by The Epoch Times/Shutterstock)
Andrew Thornebrooke
4/19/2023
Updated: 5/3/2023

WASHINGTON—Robotic killing machines prowl the land, the skies, and the seas. They’re fully automated, seeking out and engaging adversarial robots across every domain of war. Their human handlers are relegated to the rearguard, overseeing the action at a distance while conflicts are fought and won by machines.

Far from science fiction, this is the vision of Joint Chiefs of Staff Chairman Gen. Mark Milley.

The United States, according to Milley, is in the throes of one of the periodic revolutions in military affairs that have punctuated history.

Such revolutions have ranged from the invention of the stirrup to the adoption of the firearm to the deployment of mechanized warfare and, now, to the mass fielding of robotics and artificial intelligence (AI).

It’s a shift in the character of war, Milley said, that’s greater than any to have come before.

“Today, we are in ... probably the biggest change in military history,” Milley said during a March 31 discussion with Defense One.

“We’re at a pivotal moment in history from a military standpoint. We’re at what amounts to a fundamental change in the very character of war.”

Chairman of the Joint Chiefs of Staff Gen. Mark Milley testifies before the House Committee on Appropriations Subcommittee on Defense during a hearing for the fiscal 2023 Department of Defense budget, in Washington on May 11, 2022. (Jose Luis Magana/AP Photo)

Robotic Armies in 10 Years

Many would no doubt be more comfortable with the idea of robots battling for control of the Earth if it appeared in a science-fiction novel or on a movie screen rather than on the list of priorities of the military’s highest-ranking officer.

Milley said he believes, however, that the world’s most powerful armies will be predominantly robotic within the next decade, and he means for the United States to be the first across that cybernetic Rubicon.

“Over the next 10 to 15 years, you’ll see large portions of advanced countries’ militaries become robotic,” Milley said. “If you add robotics with artificial intelligence and precision munitions and the ability to see at range, you’ve got the mix of a real fundamental change.”

“That’s coming. Those changes, that technology ... we are looking at inside of 10 years.”

That means that the United States has “five to seven years to make some fundamental modifications to our military,” Milley said, because the nation’s adversaries are seeking to deploy robotics and AI in the same manner, but with Americans in their sights.

The nation that is the first to deploy robotics and AI together in a cohesive way, he said, will dominate the next war.

“I would submit that the country, the nation-state, that takes those technologies and adapts them most effectively and optimizes them for military operations, that country is probably going to have a decisive advantage at the beginning of the next conflict,” Milley said.

The global consequences of such a shift in the character of war are difficult to overstate.

Milley compared the ongoing struggle to form a new way of war to the competition that occurred between the world wars.

In that era, Milley said, all the nations of Europe had access to new technologies ranging from mechanized vehicles to radio to chemical weapons. All of them could have developed the unified concept of maneuver warfare that replaced the attrition warfare that had defined World War I.

But only one, he said, integrated them into a bona fide new way of war.

“That country, Nazi Germany, overran Europe in a very, very short period of time ... because they were able to take those technologies and put them together in a doctrine which we now know as Blitzkrieg,” he said.
The Artemis robot (Advanced Robotic Technology for Enhanced Mobility and Improved Stability), created by UCLA researchers. (Courtesy of UCLA Robotics and Mechanisms Laboratory)

Blitzkrieg 2040

Milley, and the Pentagon with him, hopes to do the same now by bringing together emergent capabilities such as robotics, AI, cyber and space platforms, and precision munitions into a cohesive doctrine of war.

By being the first to integrate these technologies into a new concept, Milley said, the United States can rule the future battlefield.

To that end, the Pentagon is experimenting with new unmanned aerial, ground, and undersea vehicles and seeking to exploit the pervasiveness of nonmilitary smart technologies, from watches to fitness trackers.

Though the effort is just gaining traction, Milley has claimed since 2016 that the U.S. military would field substantial robotic ground forces and AI capabilities by 2030.

In just a few weeks, that effort will take a concrete step forward, when invitations from the Defense Department (DOD) go out to leaders across the defense, tech, and academic spheres for the Pentagon’s first-ever conference on building “trusted AI and autonomy” for future wars.

The Pentagon is on a corresponding hiring spree, offering six-figure annual salaries to experts willing and able to develop and integrate technologies including “augmented reality, artificial intelligence, human state monitoring, and autonomous unmanned systems.”

Likewise, the U.S. Army Futures Command, created in 2018, counts among its critical goals the design of what it calls “Army 2040.” In other words, the AI-dependent, robotic military of the future.

Futures Command deputy commanding general Lt. Gen. Ross Coffman said he believes that 2040 will mark the United States’ true entry into an age characterized by artificially intelligent killing machines.

Speaking at a March 28 summit of DOD leaders and technology experts, Coffman described the partnership between man and machine that he envisions for the future, relating it to the relationship between a dog and its master.

Rather than having AI help soldiers get into the fight, however, Coffman said humans will be helping machines get to the battlefield.

“I think we’re going to see a flip in 2040 where humans are doing those functions that allow the machine to get into a position of relative advantage, not the machine getting humans into a position of relative advantage,” he said.
A woman reaches to touch a robotic arm developed by the Johns Hopkins University Applied Physics Laboratory on display at the Defense Advanced Research Projects Agency (DARPA, the Pentagon's science research group) Robotics Challenge Expo at the Fairplex in Pomona, Calif., on June 6, 2015. (Chip Somodevilla/Getty Images)

‘Everything Spins Out of Control’

Remaking the U.S. military and forming a new, cohesive way of war is a tall order. It’s nevertheless one that the Pentagon appears prepared to pay for.

The DOD is requesting a record $1.8 billion in funding for AI projects for the next year alone. That amount will exceed the estimated $1.6 billion in AI investments being made by China’s military.

Much of that funding is earmarked for initiatives to improve the decision-making of autonomous weapons systems.

The effort appears at the very least to be a real start toward Milley’s vision of fielding autonomous systems en masse. It also raises deep concerns about what the next war could look like and whether the very much human DOD leadership is adequately prepared to manage its autonomous creations.

John Mills, former director of cybersecurity policy, strategy, and international affairs at the Office of the U.S. Secretary of Defense, said he believes that this path is rife with the potential for unintended consequences.

“It is Skynet,” Mills told The Epoch Times, referencing the fictional AI that conquers the world in the movie “The Terminator.”

“It is the realization of a Skynet-like environment.

“The question is, ‘What could possibly go wrong with this situation?’ Well, a lot.”

Mills said that he doesn’t believe AI deserves all the mystique it’s been given in popular culture but that he is concerned about the apparent trend in military decision-making toward building systems with real autonomy—that is, systems capable of making the decision to kill without first obtaining human approval.

“[AI] sounds dark and mysterious, but it’s really big data, the ability to ingest and analyze that data with big analytics, and the key thing now is to action that data, often without human interaction,” Mills said.

The loss of this human checkpoint, the so-called “man in the loop,” in many proposed future technologies is thus a cause for concern.

Training human beings to correctly distinguish friend from foe before engaging in kinetic action is complicated enough, Mills said, and it’s much more complicated still with machines.

“What’s different now is the ability to action these incredible data sets autonomously and without human interaction,” Mills said.

“The integration of AI with autonomous vehicles, and letting them action independently without human decision-making, that’s where everything spins out of control.”
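
For readers unfamiliar with the term, the “man in the loop” issue Mills raises comes down to a question of software architecture: does the weapon act directly on its own classification, or must a human confirm before it engages? The sketch below is purely illustrative and is not drawn from any actual DOD or vendor system; every name in it (Track, hostile_confidence, human_in_the_loop_decision, and so on) is a hypothetical stand-in invented for this example.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class Action(Enum):
    HOLD_FIRE = auto()
    ENGAGE = auto()


@dataclass
class Track:
    """A sensed contact, as a hypothetical targeting algorithm might represent it."""
    track_id: str
    hostile_confidence: float  # classifier output in [0.0, 1.0]


def autonomous_decision(track: Track, threshold: float = 0.9) -> Action:
    """Fully autonomous mode: the machine acts directly on its own classification.
    No human checkpoint exists anywhere in this code path."""
    return Action.ENGAGE if track.hostile_confidence >= threshold else Action.HOLD_FIRE


def human_in_the_loop_decision(track: Track,
                               approve: Callable[[Track], bool],
                               threshold: float = 0.9) -> Action:
    """'Man in the loop' mode: the machine only recommends; a human operator
    (the `approve` callback) must confirm before the system may engage."""
    if track.hostile_confidence < threshold:
        return Action.HOLD_FIRE
    return Action.ENGAGE if approve(track) else Action.HOLD_FIRE


if __name__ == "__main__":
    contact = Track(track_id="T-042", hostile_confidence=0.93)

    # Autonomous: the engagement decision is made entirely by the algorithm.
    print("autonomous:", autonomous_decision(contact))

    # Supervised: a person must explicitly approve before anything fires.
    operator_says_yes = lambda t: input(f"Engage {t.track_id}? [y/N] ").strip().lower() == "y"
    print("supervised:", human_in_the_loop_decision(contact, operator_says_yes))
```

The structural point is what matters: the only difference between the two functions is the human approval callback, and removing it is all it takes to turn a supervised system into the kind of fully autonomous one Mills describes.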

To illustrate, Mills described what a future conflict between the United States and its allies and China might look like in the Indo–Pacific.

Imagine, he said, an undersea battlespace in which autonomous submarines and other weapons systems litter the seas.

With such systems fielded by Chinese, American, Korean, Australian, Indian, and Japanese forces, the resulting chaos would likely end with autonomous systems waging war throughout the region while manned vessels held back and sought the best way to launch the next group of robotic war machines. Anything else would risk putting real lives in the path of the automated killers.

“How do you plan for engagement scenarios with autonomous undersea vehicles?” Mills said.

“This is going to be absolute chaos in subsurface warfare.”

U.S. carrier group in the Philippine Sea on Sept. 19, 2021. (U.S. Navy photo by Mass Communication Specialist 2nd Class Haydn N. Smith)

Automated Killing

To be sure, preventing the automated killing of combatants by artificially intelligent systems is something the Pentagon has thought about for a long time.

The 2018 Artificial Intelligence Strategy, for example, sought to accelerate AI adoption across the DOD while seeking ethical approaches to “reduce unintentional harm.”

The 2020 Ethical Principles for Artificial Intelligence likewise sought to ensure that only “trustworthy” and “governable” AI technologies were adopted by the military.

The 2022 Responsible Artificial Intelligence Strategy and Implementation Pathway (pdf), meanwhile, outlined a plan to mitigate potential unintended consequences of the deployment of AI in military systems.

None of these efforts, however, will actually prevent the adoption of fully autonomous killing machines. And they were never intended to.

That’s because all such documents were crafted under the guidance of DOD Directive 3000.09 (pdf), the Pentagon’s guiding document for the development of autonomous weapons systems.

“That’s foundational,” Mills said of the document. “It’s very important because it drives development.”

Originally issued in 2012, the document just received a major overhaul in January, meant to prepare the Pentagon for what DOD Director of Emerging Capabilities Policy Michael Horowitz said at the time was a “dramatic, expanded vision for the role of artificial intelligence in the future of the American military.”

There is just one caveat to that ethical, trustworthy, and governable deployment of lethal AI systems: The Pentagon doesn’t have any hard and fast rules to prohibit autonomous systems from killing.

Although 3000.09 is often referenced by proponents of man-in-the-loop technologies, the document does not actually promote such technologies, nor does it prohibit the use of fully automated lethal systems.

Instead, the document outlines a series of rigorous reviews that proposed autonomous systems must go through. And although no independent AI weapon systems have made it through that process yet, the future is likely to see many such systems.

This is due in no small part to the fact that China’s communist regime is rapidly working to field its own automated killing machines, and the DOD will have to prepare to meet that threat head-on, all the while attempting to retain American values.

“[China is] trying to address these hard problems also, of allowing [AI] to engage without human intervention,” Mills said.

“I think their proclivity is to allow it even if they accidentally kill their own people.”

In the end, the next war may well be fought primarily between artificially intelligent robots, with human handlers standing on the sidelines, trying their best to direct the action.

Whether the United States can manage that without losing control of its creations remains to be seen.

Mills said he is hopeful that if anyone can do it, it’s the United States. After all, he said, we have the best human talent.

“I think we still have enough guardrails where it will be iterative, so that we can become smarter and learn to build into the algorithms precautions and control measures,” Mills said.

“I think we have good teams and people in place.”

The Pentagon didn’t respond to a request for comment from The Epoch Times by press time.

Andrew Thornebrooke is a national security correspondent for The Epoch Times covering China-related issues with a focus on defense, military affairs, and national security. He holds a master's in military history from Norwich University.