AI on the Battlefield: Speed, Accountability, and the Governance Gap
Militaries have always adopted new technology before the rules catch up. Gunpowder, aerial bombardment, nuclear weapons: each forced a belated scramble to write laws around capabilities already in the field. AI is doing the same thing, but faster, and with a twist. This time, the system making the targeting decision might not be a person at all.
That is not a hypothetical. Lethal Autonomous Weapons Systems (LAWS), weapons that can select and engage targets without a human pulling the trigger, are already in development across multiple nations. The strategic case for them is straightforward: AI processes sensor data and executes decisions in milliseconds, far faster than any human operator. The problem is that speed is also the argument against them.
Nobody to Hold Responsible
International humanitarian law (IHL) requires that someone be accountable when civilians are killed in a conflict. A soldier fires a weapon. An officer gives an order. A commander approves a target list. There is a chain of human decisions, and at the end of it, a person who can be held responsible under the laws of war.
AI breaks that chain. Research by Susannah Kate Devitt on the normative epistemology of LAWS identifies the core problem: when an autonomous system makes a targeting decision, it is genuinely unclear whether responsibility sits with the soldier who deployed it, the engineer who designed it, or the commander who authorized its use. Current IHL does not resolve this. The gap is not a technicality. It means that civilian deaths caused by autonomous systems may produce no accountability at all.
A 2024 paper presented at the ICRC's 34th International Conference reinforced this point: existing legal frameworks were built around human decision-makers, and retrofitting them to cover AI systems requires more than minor amendments. It requires rethinking what accountability even means when the actor is a trained model running on military hardware.
The Speed Trap
Supporters of autonomous weapons often argue that AI removes human error and emotional bias from targeting decisions. There is something to this. A system that doesn't panic, doesn't hold grudges, and can cross-reference a target against thousands of data points in real time sounds appealing compared to a soldier making a split-second call under fire.
The problem is that "meaningful human control", the standard that international frameworks rely on to ensure accountability, requires time humans no longer have. Devitt's 2023 paper on advance control directives argues that as AI compresses decision cycles to sub-second timescales, genuine human command authority becomes structurally impossible. A human who clicks "approve" on a target list generated by an AI system three hours ago is not exercising meaningful control over what happens when the weapon deploys. They're rubber-stamping a process they cannot realistically supervise.
Van Diggelen and van den Bosch's 2023 work on human-machine military teams adds another layer: the interface design of these systems actively shapes how much control humans retain. Build a system where the AI presents one recommended target with a ten-second confirmation window, and you have engineered meaningful human control out of the process while retaining a plausible claim that a human was in the loop.
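To make that design point concrete, here is a deliberately minimal, hypothetical sketch in Python. It is not drawn from any real system, and the function names are invented for illustration. Both policies give the operator the same ten-second window; the only difference is what happens when the window expires, and that single default decides whether the human retains a genuine veto.

```python
import queue

CONFIRMATION_WINDOW_SECONDS = 10  # the hypothetical window from the paragraph above


def wait_for_operator(decisions: "queue.Queue[str]") -> "str | None":
    """Block for the length of the confirmation window; return the operator's
    input, or None if the window expires with no response."""
    try:
        return decisions.get(timeout=CONFIRMATION_WINDOW_SECONDS)
    except queue.Empty:
        return None


def engage_unless_aborted(decisions: "queue.Queue[str]") -> bool:
    """Policy A: silence counts as consent. A human is nominally in the loop,
    but the system proceeds unless someone actively intervenes in time."""
    return wait_for_operator(decisions) != "abort"


def engage_only_if_approved(decisions: "queue.Queue[str]") -> bool:
    """Policy B: silence counts as refusal. The system cannot act without a
    positive, timely human decision."""
    return wait_for_operator(decisions) == "approve"
```

Both designs can be described, truthfully, as "a human reviews every engagement". Only one of them requires a human decision before anything happens.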
Good Soldiers, Impossible Positions
One of the more uncomfortable findings in this body of research concerns the soldiers who are trying to do the right thing.
Devitt's "bad, mad, and cooked" framework argues that AI-augmented military environments can place morally responsible individuals in positions where they cannot discharge that responsibility. Accountability diffuses across the human-AI team. No single person made the fatal decision. Everyone followed the process. The process killed civilians. This is not a failure of individual ethics, it is a failure built into the system by design, whether intentional or not.
Islam and Wasi's 2024 framework paper on military AI and human rights makes the stakes concrete: AI deployed in civilian-dense or ambiguous environments without adequate human oversight introduces systemic risk of human rights violations. Targeting that is precise under normal conditions becomes unpredictable at the edges, and warfare happens at the edges constantly.
The Battlefield Comes Home
Not all military AI fires weapons. Some of it fires words.
Cognitive warfare, the use of information operations to undermine an adversary's will to fight or a population's trust in its institutions, is as old as war itself. AI has scaled it to a level that human defenders struggle to match. Foreign influence operations on social media can now generate and distribute targeted disinformation at speeds and volumes that overwhelm manual moderation.
Van Diggelen and Aidman's 2025 paper on AI-enabled countermeasures to cognitive warfare acknowledges the bind: countering AI-generated disinformation at scale requires AI-based defenses. Deploying those defenses raises its own concerns about automated content moderation, censorship, and who controls what the system flags as a threat. Democratic societies face a genuine tension between protecting themselves and protecting the open information environments that define them.
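As a purely illustrative sketch (hypothetical scores and thresholds, not any deployed moderation system or the paper's own method), the bind can be reduced to a single tuning parameter: at machine-generated volume, flagging has to be automatic, and where the threshold sits is a decision about speech as much as a decision about defense.

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    score: float  # model-estimated probability of coordinated disinformation (hypothetical)


# Hypothetical scored posts; in a real pipeline the scores would come from a classifier.
FEED = [
    Post("State media repost amplified by a bot network", 0.97),
    Post("Partisan but organic commentary on a news story", 0.55),
    Post("Satirical post mimicking propaganda framing", 0.48),
    Post("Unrelated sports discussion", 0.03),
]


def flag(posts: list[Post], threshold: float) -> list[Post]:
    """Everything at or above the threshold is removed or downranked automatically,
    because the volume rules out human review of each case."""
    return [p for p in posts if p.score >= threshold]


# The "defense" and the "censorship" concern are the same knob:
print([p.text for p in flag(FEED, threshold=0.9)])  # conservative: catches the bot network only
print([p.text for p in flag(FEED, threshold=0.4)])  # aggressive: also sweeps up satire and partisan speech
```

Who sets that threshold, and who can audit it, is the "who controls what the system flags" question raised above.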
Governance Is Running Behind
Some countries are taking the governance challenge seriously. Australia's approach, documented by Devitt and Copeland in 2021, mandates Article 36 reviews for all new AI-based weapons systems: the legal review process, required under Additional Protocol I to the Geneva Conventions, that checks whether a new weapon complies with international humanitarian law. Australia also emphasizes sovereign AI military capability, meaning it wants to develop and control its own military AI rather than rely on foreign technology.
That second priority illuminates the broader problem. Nations are racing to develop military AI independently, which is precisely the dynamic that makes international governance so difficult. Every country that builds its own autonomous weapons capability outside a shared framework reduces the leverage any framework would have. UN-convened expert discussions on LAWS have been under way since 2014, and the Group of Governmental Experts that grew out of them has yet to produce a binding treaty. Autonomous weapons have continued to proliferate regardless.
An ungoverned AI arms race is not a future scenario to model. It is the current situation, measured in deployment decisions being made right now in Washington, Beijing, Moscow, and a growing list of others.
The research is consistent on what is needed: clear legal definitions of accountability for autonomous targeting decisions, enforceable requirements for meaningful human control, binding international limits on LAWS deployment in civilian environments, and governance structures that can move faster than the procurement cycles they are supposed to regulate. Achieving any of that requires political will that has not materialized at the international level. For now, the systems are ahead of the rules, and the rules are ahead of the enforcement.