
Deadly and Imminent: The Pentagon’s Mad Dash for Silicon Valley’s AI Weapons

Report Summary

  • One year into its flagship Replicator initiative, the Department of Defense has still not clarified whether it is fast-tracking AI-empowered weapons designed to kill on the battlefield.
  • The uncertainty around the program’s parameters exists by design, as the Pentagon cultivates strategic ambiguity.
  • The Pentagon wants to prove that it can innovate quickly. To do this, it has developed a program with few administrative hurdles, lower testing thresholds for new weapons systems, and minimal public inquiries into its undertakings that might slow down its procurement processes.
  • All signs point to the Pentagon developing “killer robots” via Replicator, despite deflections from Pentagon representatives themselves.
  • The uptick in Silicon Valley and emerging technology firms’ engagement in the defense sector is a clear sign of what’s to come for Pentagon contracting.
  • Policymakers keen to speed-race the acquisition of AI-empowered military technology and beat adversary states must pause to consider the long-range strategic and ethical consequences of introducing killer robots to global warfighting.
  • The risks include higher death tolls in war, indiscriminate targeting, and greater disconnects between military operators and the end results of the decisions they take when using AI technology – as well as an autonomous weapons arms race that could directly undermine U.S. security objectives.
  • The report recommends that Deputy Secretary of Defense Kathleen Hicks and Secretary of Defense Lloyd Austin clarify that the AI weaponry procured through Replicator will not be empowered to kill on the battlefield.
  • Following this, Pentagon leadership must establish and articulate the agency’s protocols to ensure responsible authority over emerging artificial intelligence in the U.S. military.
  • Finally, “investing in AI” cannot be used as carte blanche to justify ever-increasing Pentagon spending.

 

What is Replicator?

In August 2023, Deputy Secretary of Defense Kathleen Hicks announced a new Pentagon initiative called Replicator. The initiative is a flagship of the Defense Innovation Unit within the Department of Defense, a team tasked with fast-tracking the development and acquisition of new national security technologies for the Pentagon. According to the Pentagon’s official statements, the program has a two-year mandate to field hundreds of attritable, all-domain weapons systems, in order to hamstring China on its Eastern coast and enable the U.S. to better defend Taiwan from an invasion by China. “Attritable” is Pentagon-speak indicating that the weapons can be produced at low cost and can be replaced inexpensively if downed in combat. Essentially, Replicator aims to source and scale production of small, low-cost weapons (many of which are drone units) capable of swarming a shoreline or set of targets.

The program also seeks to showcase the Pentagon’s growing ability to conceptualize and procure new technologies and weapons systems quickly, rather than spending years mired in development and administrative bottlenecks. Replicator is now more than halfway through its lifecycle, and the Pentagon has publicly proclaimed it a success on track to deliver fully on its mandate.

In August 2024, Hicks gave a speech to the National Defense Industrial Association conference marking the first anniversary of her initial Replicator announcement, in which she touted Replicator’s early successes and preemptively jabbed at naysayers who, she claims, call Replicator “wishful thinking,” “impossible to achieve,” and “a fantasy.” Instead, she asserted, it is one of the most ambitious initiatives the Pentagon has ever undertaken and an early-stage success, likening the effort to the moral imperative to innovate and produce warplanes during World War II.

Throughout the speech, she lauded that the Pentagon has “identified and validated key operational requirements from combatant commands” in the 11 months following the start of the department-wide effort and “selected initial capabilities to meet those demands, from a field of nominees across multiple domains, harnessing the very latest in technology.” She also confirmed that the Pentagon “secured needed funding for fiscal year 2024” to the tune of about $500 million and has budgeted “a similar amount for fiscal year 2025,” bringing the total program cost to approximately $1 billion over its projected lifecycle.

She acknowledged that many of the projected weapons systems have already been purchased, including Switchblade 600, “a loitering munition that can be launched not just from land, but also from ships and aircraft” and counter-drone capabilities. Other systems selected for mass production in the first tranches of Replicator funding include unmanned surface vessels (USVs) and interceptors that operate at various ranges, according to DefenseScoop reporting.

Then, she spoke about the depth of the Pentagon’s Congressional advocacy on Replicator, highlighting that the Department of Defense has “done nearly 40 Hill briefings since last October, averaging about one a week” and “conducted scores of briefings to Congressional committees, members, and staff.” This level of effort to secure Congressional buy-in on a program is unusual, especially given that Replicator’s budget is small relative to other DoD procurement programs. The urgency of getting Congress in lockstep indicates that top Pentagon brass sees Replicator support as one of its top strategic priorities.

The Pentagon has been much less forthcoming about exactly what Replicator will do. The core of the plan appears to be to develop the capacity to launch a “drone swarm” over China, with the number of relatively low-cost drones so great that some substantial number will evade China’s air defenses.

The risks of drone swarms, if they are in fact technologically and logistically achievable, are enormous. The sheer number of agents involved would make human supervision far less practicable or effective. Additionally, AI-driven swarms involve autonomous agents that would interact with and coordinate with each other, likely in ways not foreseen by humans and also likely indecipherable to humans in real-time. The risks of dehumanization, loss of human control, attacks on civilians, mistakes, and unforeseen action are all worse with swarms.

The Pentagon considers Replicator’s first year such a success that it has become the template for future roll-outs of new technologies and weapons. At the end of September 2024, Secretary Lloyd Austin wrote in a memo given to press that the next phase of the Pentagon’s drone warfare efforts will be called “Replicator 2.0” and that this wave of the initiative will focus on counter-drone technologies, specifically tech that can detect, track, and destroy enemy drones.

However, many details of how these systems will be used once produced remain classified and off-limits to the public. Much of what is known about Replicator contracts comes from sources within the Pentagon speaking on the condition of anonymity or from the contractors themselves in press releases or formal announcements.

The Pentagon has declined to reply to a letter from 14 civil society organizations, including Public Citizen, requesting clarification on whether Replicator will involve the use of autonomous, lethal force. Most of the publicly available information about Replicator comes from Hicks’ sparse public speeches, defense reporting, and the companies currently securing and fulfilling Replicator contracts.

Although Hicks and others cite a strategic imperative not to tip off adversaries as the primary reason why information about these systems is kept relatively secret, it is not acceptable for the U.S. military to refuse to tell the American people if and how it will use autonomous weapons. Given that Replicator marks the U.S. military’s first major foray into autonomous weaponry, the lack of transparency surrounding the program is even more troubling.

It is not yet clear whether these technologies are designed, tested, or intended for killing. While the Pentagon itself has stayed close-lipped, contractors receiving Replicator grants have begun to hint that the autonomous use of lethal force is the plan. Palmer Luckey, CEO of Anduril, a Replicator funding recipient tasked with producing autonomous missiles and other AI weaponry, spoke with striking candor at an event at Pepperdine University in early October 2024, casting himself as part of a “warrior class that is enthused and excited about enacting violence on others in pursuit of good aims.” He went on to say that “[society] need[s] people like me who are sick in that way and who don’t lose any sleep making tools of violence in order to preserve freedom.”

The Pentagon must clarify whether the Department of Defense is creating autonomous weapons that will be lethally deployed without human control and commit to safeguards around when and how artificial intelligence systems will be used in future warfare.

All AI, Everywhere

In general, it is not unreasonable for the Pentagon to explore or begin experimenting with AI technology applications in its operations. Nor is the Pentagon alone in seeking advantageous ways to apply AI to its bottom line. Nearly every sector is experiencing an AI “gold rush” at present, with civil servants and corporate actors alike seeking ways to integrate AI capabilities into their existing workstreams and products. At the same time, many of these organizations are working to preempt the foreseeable risks of adopting these new technologies.

In the federal government, agencies are weighing how AI can and should be used for collecting intelligence, batch-processing data, and automating existing workflows. However, there are ample downsides to adopting AI too soon, particularly without regulations, or without complete information about the attendant risks. From political deepfakes generated and deployed to sway elections, to algorithmic bias in facial recognition technologies and data sorting, to chatbots that “hallucinate” or provide incorrect information on demand, the real-world implications of deploying AI that is not yet reliable or operational are stark.

The Pentagon is the most consequential place for these possible pitfalls to take hold. Empowering the world’s largest and costliest warfighting machine with early, poorly-vetted versions of this technology may prove catastrophic for military strategy, battlefield targets, and humanity writ large.

Even still, top military and intelligence brass are certain that AI is what’s next for war. Former CIA Director David Petraeus spoke to Axios leadership about this shift in December 2023, predicting that the next frontier of warfare is for humans to be “on the loop rather than in the loop.” He continued, “You will have a human at some point say: ‘OK, machine. You’re free to take action according to the computer program we established for you’ – rather than remotely piloting it.”

He is not alone in that perspective – retired generals, current military officials, and industry figureheads have all insisted that “The Future of War” is synonymous with “AI in Wartime.” National security strategists with an eye for new developments in warfighting fear that without an AI cutting edge, the Pentagon would merely be “buying hardware for the last war when it should be buying more software for the next one.”

But a wholesale jump into creating and deploying killer robots could not be more ill-advised. “Do we need to find – or should we find – a more cost-effective way of downing, say, an inexpensive drone?” asked the Director of Surface Warfare on the Chief of Naval Operations’ staff at a Center for Strategic and International Studies event this spring. Top Replicator proponents believe that thinking “small, cheap, many, and mighty” is the way to address this question. However, taking down drones inexpensively does not have to mean, and should not mean, empowering deadly fighting machines to kill on their own without a human decision-maker at the helm.

“A Man in the Loop”

Many elements of the Pentagon’s protocols for AI weapons have yet to be determined or publicly announced. It is well-documented and often cited that current Department of Defense policy requires a human “in the loop” for all essential warfighting decisions. Most notably, the Pentagon’s DoD Directive 3000.09 specifies that “autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

The language of the directive is deceptively comforting, implying more human control over AI weapons decisions than the words actually convey. As industry and military forces lobby for rapid development and deployment of autonomous weapons, DoD Directive 3000.09 will exert little restraint. The directive’s key phrase of “appropriate levels of human judgment over the use of force” is undefined and completely amenable to interpretations enabling autonomous weapons use. Keeping humans “in the loop” on the use of lethal force does not mean that an identifiable person will authorize each and every exercise of deadly force. It may simply mean that a human decided to deploy an autonomous weapon and that a person has some responsibility for monitoring, at some general level and after the fact, the autonomous weapons’ performance, as Petraeus’s comment above illustrates.

Autonomous warfighting expands the number of human casualties that may occur, greatly increases the risk that incorrect targets will be attacked, puts civilians in harm’s way at higher rates, and increases the likelihood that military personnel relying on algorithms to generate target lists will feel emotionally and morally disconnected from the attacks they approve.

If soldiers or authority figures within the U.S. military can strip themselves of personal authority or accountability for the decision to kill, whether by blaming an AI system that “told them” to or by authorizing mass strikes without reviewing the system’s suggested targets, war becomes deadlier and less humane than it already is.

Furthermore, if the United States fails to demonstrate a commitment to banning killer robots, it communicates to both adversarial states and the global community that all bets are off. If the largest military in the world refuses to commit to using these technologies responsibly, there may be no upper limit to the dangerous ways international actors could weaponize AI.

As AI and warfare researcher Brianna Rosen says succinctly: “The most immediate threat is not the ‘AI apocalypse’ – where machines take over the world – but humans leveraging AI to establish new patterns of violence and domination over each other.”

U.S. Efforts at the Global Helm

Devoted international advocates have been working to prevent the proliferation of killer robots for over a decade via the Stop Killer Robots campaign, a coalition focused on creating and supporting an international treaty instrument to govern the development and deployment of autonomous weapons. Public Citizen is a member of this coalition.

Thus far, the United States has not committed to or openly supported the treaty process. Doing so immediately would signal a serious commitment to utilizing AI technology correctly and with discretion, underscoring a commitment to preserving human life and upholding international humanitarian law.

That said, the United States also has a much broader role to play in preventing the widespread introduction and use of autonomous weapons. Its efforts could begin at home, with clarity and commitment around Replicator and any subsequent programming. If the U.S. military were to commit on its own to a ban on AI-powered killer robots, it would set the tone for global conduct using these technologies going forward.

The Usual Suspects and Then Some

For every reason outlined above, this moment is red hot for determining what comes next for autonomous weaponry. The Replicator initiative and its bearing on the Pentagon’s next decisions could not be of higher importance.

Yet military-industrial power players are driving a reckless rush to develop and deploy autonomous weapons. If the Silicon Valley startups and established defense contractors like Lockheed Martin, Raytheon, Northrop Grumman, and Boeing have one thing in common, it’s that their primary incentive is securing as large a slice of the Pentagon’s nearly $900 billion annual budget as possible.

The relatively new actors in the defense space are ardent advocates for more and faster spending on AI weaponry, regularly lamenting their inability to compete with established industry insiders’ massive lobbying infrastructure, longstanding relationships with members of Congress, and integration into the defense acquisition process. For their part, the prime contractors say the hype around AI is leading to premature deals with Silicon Valley companies. The net effect of the finger-pointing seems to be to drive overall Pentagon spending still higher.

Heidi Shyu, the Pentagon’s chief tech officer, told Axios in August 2024 that new entrants to the military-industrial contracting space are “nipping at the heels, I tell you. I have traditional defense contractors say, ‘Hey, this isn’t fair.’”

In some cases, AI weapons makers seem to relish their role in escalating robotic warfare. Palmer Luckey, the 32-year-old billionaire founder of Anduril, one of the firms rapidly scaling up and securing Replicator contracts, tweeted with seeming pride that he did not see the United States as the “world’s policeman” but rather its “gun store.”

Anduril recently flew a half-dozen reporters to its Texas test site to showcase its new developments. On the trip, Colin Demarest at Axios wrote that Anduril “showed how a single person familiar with Siri and armed with a laptop could govern a clutch of jet-powered drones” and flexed how an AI “commander” was able to “[oversee] a team of midsize drones as they took off, circled up, patrolled the area and downed a simulated enemy aircraft.”

Putting aside the question of whether these AI weapons can be made to work as intended, elected officials, public servants, and servicemembers need to ask themselves if they should be. A company like Anduril, which hired more than 1,000 employees in nine months as it prepares to fulfill a contract for unmanned fighter jets and recently announced that it had raised $1.5 billion in Series F funding to “hyperscale defense manufacturing,” will always say yes.

The Department of Defense must establish safeguards for its own operations before it’s too late. Voices insisting that lucrative and technologically iterative AI weapons systems are beyond reproach must be treated with extreme scrutiny.

An Urgent Responsibility

Public Citizen repeats its urgent call for the Pentagon to clarify whether or not the Replicator program is creating or purchasing weapons with the intent to autonomously deploy them to kill human beings.

In addition to clarifying whether or not Replicator-produced weapons systems are being designed to kill on the battlefield, members of Congress and operators within the Pentagon alike should demand comprehensive guardrails and transparent information about the scope and targets of autonomous weapons systems used by the U.S. military.

Policies about when, where, and how autonomous weapons systems will be deployed are of essential concern to the American public and must be immediately shared by the Department of Defense. In addition, guidance like the DoD Directive on Autonomous Weapons must be tightened and codified into law and not left in the hands of an agency that could very easily adjust its own policies as killer robot technologies become more prominent and dangerous.

Finally, “artificial intelligence” should not be used as a catch-all justification to summon billions more in Pentagon spending, especially when the existing annual budget for the U.S. military already dwarfs every other U.S. agency and is careening towards the $1 trillion mark.

The capacity to develop weapons for AI-empowered warfare is not science fiction; it is here. The Pentagon owes Americans clarity about its own role in advancing the autonomous weapons arms race via Replicator, as well as a detailed plan for ensuring it does not open a Pandora’s box of new, lethal weapons on the world by refusing to hold its own operations accountable.