In the Russian-Ukrainian war, Operation Spider's Web, carried out in June 2025 and widely documented through open sources, left a lasting impression. Prepared for over a year under the direct supervision of President Zelensky, the operation consisted of coordinated strikes by kamikaze drones launched deep inside Russian territory from «Trojan horse» trucks that had been driven in from Ukraine. These strikes destroyed or seriously damaged around 40 Russian aircraft, including many strategic platforms such as long-range bombers (Tupolev Tu-22M, Tu-95 and Tu-160), at five Russian air bases. Among them was the Belaya base, located near Irkutsk, north of the Mongolian border... some 6,000 kilometres from Kiev!
To compensate for a possible loss of the data link during this long-range attack (particularly if jamming prevented the drones from reaching their targets), Ukrainian forces equipped the kamikaze drones with AI, after training it on full-scale Russian bombers on display in a museum in Kiev. The idea was simple: if the pilots lost control of the drones, the AI would take over and guide them to their target, a pre-identified aircraft. But a problem quickly arose: because the unexpected presence of civilians on the ground (runway personnel, mechanics, firefighters, etc.) had not been incorporated into its training, the AI continued its mission without taking potential collateral damage into account.
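To make that failure mode concrete, here is a minimal, purely illustrative sketch in Python of the kind of control-handover logic described above. Every name in it (datalink_alive, onboard_classifier, the labels) is a hypothetical stand-in, not taken from any real system: the point is simply that a model trained only on aircraft silhouettes has no way to represent civilians, so no branch of the logic can ever abort on their account.

```python
from dataclasses import dataclass

# Illustrative sketch only: all names are hypothetical.

@dataclass
class Detection:
    label: str          # what the onboard model believes it sees
    confidence: float

def onboard_classifier(frame) -> Detection:
    # Stand-in for a vision model trained solely on aircraft silhouettes
    # (e.g. museum airframes). It has no "civilian" class at all, so
    # people on the ground are simply invisible to the mission logic.
    return Detection(label="target_aircraft", confidence=0.9)

def guidance_step(frame, datalink_alive: bool, operator_command: str) -> str:
    if datalink_alive:
        return operator_command              # human remains in the loop
    det = onboard_classifier(frame)          # AI takes over under jamming
    if det.label == "target_aircraft" and det.confidence > 0.8:
        return "steer_towards_detection"
    return "continue_search"
    # Note what is missing: no path returns "abort", because nothing
    # in training taught the model to recognise civilians.
```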
This example perfectly illustrates the current dilemma: AI can increase military effectiveness, but it cannot (yet?) anticipate all unforeseen situations. Above all, it raises a key question: who is responsible in the event of a violation of international humanitarian law (IHL), the legal framework governing armed conflicts?
FUNDAMENTAL LEGAL CHALLENGES
AI promises to speed up military decision-making, but it can also undermine the application of the fundamental principles of IHL: distinction between civilians and combatants, proportionality of attacks and precaution in the conduct of operations. A chief commissioner of the armed forces, legal adviser to operations (LEGAD, in military jargon) at the Armed Forces Staff (EMA), details this challenge posed by AI:
«A fully autonomous system is not possible. It would be unlawful and would place us in breach of our treaty obligations. The notion of «appropriate» human control is central to France. The challenge is that the assistance provided by AI in military decision-making through the processing of a significant flow of information must not overshadow the need for human analysis.
What could be done manually (by humans) would take much longer, and time is a critical factor. But we must not take the easy option and allow the use of AI to lead to a form of intellectual laziness, especially when we are under pressure in the circumstances of a high-intensity conflict.»
In other words, delegating the decision to use lethal force entirely to an autonomous machine would amount to disregarding the legality review required by IHL before each engagement, which would create a risk of decision-makers feeling less accountable.
But the danger is more insidious: AI must not become a crutch for decision-makers. «Anything that speeds up decision-making must not translate into intellectual laziness and a lowering of the critical level of analysis of legality,» adds the EMA's LEGAD. Behind this warning lies the fear that the speed offered by AI will push decision-makers to mechanically validate options without exercising the necessary discernment, especially in contexts of operational pressure.
A civil lawyer and legal adviser on the law of armed conflict at the EMA adds:
«There are two points: on the one hand, the use of decision support systems, and on the other hand, what this means in terms of loss of control, overconfidence or over-caution.»
In other words, AI can be useful, but it should never replace human judgement. The real challenge is therefore one of balance: how can we harness the computing and analytical power of AI without becoming overconfident and delegating too much, or distrustful and failing to exploit its advantages?
The underlying issue is clear: AI creates tension between operational efficiency and legal requirements. The more it speeds up decision-making, the more it risks reducing the space for critical reflection that is essential for compliance with IHL.
LEGAL RESPONSIBILITY: HUMAN BEINGS REMAIN AT THE CENTRE
The question of responsibility is central, as it touches on the heart of IHL: who is accountable when civilians are affected by an attack? Examples from Ukraine, such as Operation Spider's Web, show that AI-driven drones can continue their mission despite the presence of civilians, without considering the «excessive» nature of collateral damage, because this scenario had not been incorporated into their programming. This inability to anticipate all unforeseen situations reveals a fundamental limitation: AI cannot examine all phases of targeting on its own.
Indeed, «this should not change the decision-maker's responsibility», notes the EMA's LEGAD. Even if the AI takes over, it is still the human authority that must answer for the consequences, all the more so because AI is not in itself a legal subject: it has no legal personality. In other words, AI can execute, but it cannot assume responsibility. Responsibility remains inseparable from the chain of command, which guarantees the legitimacy and legality of operations.
The legal adviser agrees:
«The objective is not to alter the attribution of liability, nor to limit any form of liability.»
It is therefore necessary to maintain a feedback loop: checking after each engagement whether the target was lawful and providing feedback to the system in order to preserve human control. This requirement is not only technical, it is also legal: it preserves human control and prevents AI from becoming a grey area where responsibility is diluted.
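Purely as an illustration of what such a loop could look like in software (the record fields and functions below are hypothetical simplifications, not actual doctrine or tooling), the key design point is that every engagement produces a post-strike legality review attributed to a named human, never to the system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EngagementRecord:
    target_id: str
    ai_recommendation: str          # what the decision-support system proposed
    human_decision: str             # what the commander actually ordered
    lawful_after_review: Optional[bool] = None   # filled in after the strike
    reviewer: Optional[str] = None               # a named human, never the AI

def post_engagement_review(rec: EngagementRecord,
                           lawful: bool, reviewer: str) -> EngagementRecord:
    # Close the loop: record the human legality assessment so it can be
    # fed back into evaluation or retraining and errors are not repeated.
    rec.lawful_after_review = lawful
    rec.reviewer = reviewer
    return rec

record = EngagementRecord("T-042", "engage", "engage")
post_engagement_review(record, lawful=False, reviewer="LEGAD on duty")
# Responsibility stays attached to a person (record.reviewer),
# not to the model that produced record.ai_recommendation.
```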
The issue is therefore twofold: on the one hand, AI increases the complexity of operations and multiplies the risks of failure; on the other hand, it must never be used as an excuse to evade IHL obligations. In the event of a violation, it will not be «the machine» that will be judged, but rather the human decision-makers – military leaders and political authorities – who chose to use it.
ADOPT NEW RULES OR ADAPT EXISTING LAW?
The rise of AI in armed conflicts has reignited an old debate: should specific legal instruments be created to regulate autonomous systems, or do existing rules suffice? Some argue for a new international treaty, believing that the Geneva Conventions, the foundation of international humanitarian law drafted in the mid-20th century, cannot anticipate the challenges posed by technologies capable of learning and making decisions in real time. The argument is appealing: an ad hoc framework would provide a clear and appropriate response. However, it also carries a major risk: that of fragmenting the existing body of law and undermining its authority.
The French position, as reiterated by the advisor to the Armed Forces Staff, is clear: «The Geneva Conventions and their Additional Protocols are broad enough to adapt to new systems.» IHL is therefore not set in stone; it has already demonstrated its ability to evolve and apply to new contexts. Creating a specific rule could open the door to differences in interpretation, or even political disagreements, which would weaken the binding force of IHL.
The LEGAD shares this pragmatic approach:
«It is better to focus on what we have, pushing the interpretation of texts in manuals, conferences, etc. to the maximum. This is the most realistic approach, especially since the possibility of concluding new universally recognised instruments is very uncertain in the current geopolitical context.»
The priority is not to reinvent the law, but to strengthen its practical application: training military personnel, developing practical guides, and increasing the number of international forums to clarify the interpretation of existing rules.
For France, as we can see, it is better to consolidate than to disperse. The challenge is therefore not the absence of rules, but their effective implementation in the face of technologies that are evolving faster than diplomatic negotiations.
AI AS A FACTOR IN COMPLIANCE... UNDER CERTAIN CONDITIONS
AI is sometimes presented as a solution to human limitations: reducing errors, neutralising biases, and enabling the processing of massive volumes of data. In theory, it could therefore strengthen compliance with IHL. However, this promise can only be fulfilled on one condition: maintaining appropriate human control.
The chief commissioner at headquarters emphasises this point:
«There needs to be «appropriate» human oversight, the details of which can be discussed. According to French doctrine, AI can be an asset in terms of targeting processes, for example, provided that this «appropriate» human oversight is in place.»
According to him, AI can speed up the processing of low-risk targets, such as clearly identified and isolated military objectives, without automating that processing, and free up human analysis for the most sensitive cases, such as the targeting of potentially dual-use assets, particularly in urban areas. But he warns against any temptation to delegate critical analysis: AI must remain a tool, not a decision-maker.
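A rough sketch of that triage, under stated assumptions (the categories, field names and ordering below are hypothetical illustrations, not French doctrine): the AI only sorts the queue, every case still ends at a human sign-off, and what changes is where scarce analyst time goes first.

```python
def triage(candidates: list[dict]) -> list[dict]:
    # Split candidate targets so human attention goes first to the hard
    # cases. Nothing here authorises an engagement: the output is an
    # ordered review queue for humans, not a strike list.
    routine, sensitive = [], []
    for c in candidates:
        if c["clearly_military"] and c["isolated"] and not c["dual_use"]:
            routine.append(c)       # fast-tracked review, still human-signed
        else:
            sensitive.append(c)     # e.g. dual-use assets in urban areas
    return sensitive + routine      # sensitive cases reviewed first

queue = triage([
    {"name": "isolated radar site", "clearly_military": True,
     "isolated": True, "dual_use": False},
    {"name": "urban bridge", "clearly_military": False,
     "isolated": False, "dual_use": True},
])
# Every item in `queue` still requires explicit human validation.
```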
There are also positive uses, as highlighted by the EMA's civil adviser: «One example is mine clearance using AI, which contributes to security in armed conflicts.» AI can thus help protect civilians by reducing the dangers associated with mines through improved mapping of sensitive areas.
The issue is therefore twofold: AI can be a factor in compliance if it is used to reinforce human vigilance, but it becomes a risk if it is perceived as a guarantee in itself. In other words, AI does not correct flaws in human judgement, it merely shifts them: if poorly supervised, it can introduce new biases, overlook unforeseen situations and undermine the principles of distinction and proportionality.
A PERMANENT TENSION BETWEEN OPERATIONAL EFFICIENCY AND LEGAL REQUIREMENTS
Upstream of the targeting cycle, the LEGAD points out that AI can help establish a shortlist of targets, in order to avoid striking objects protected by IHL, such as installations containing dangerous forces or religious buildings. But here again, the machine only compiles and cross-references data; it is humans who must ultimately make the decisions. The real question is therefore: do we want AI that assists in ensuring compliance with IHL, or AI that claims to guarantee it? In the first case, it is an asset. In the second, it becomes a danger.
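As a sketch of that shortlist idea (the protected categories below loosely track Additional Protocol I, but the registry, field names and matching rule are hypothetical simplifications): the machine only cross-references candidates against a no-strike registry, and its output remains a shortlist for human decision, not an engagement order.

```python
PROTECTED_CATEGORIES = {
    "hospital", "religious_building", "cultural_property",
    "dangerous_forces",   # e.g. dams, dykes, nuclear power stations
}

def shortlist(candidates: list[dict],
              no_strike_registry: list[dict]) -> list[dict]:
    # Exclude any candidate whose location matches a registered
    # protected object; the result is a shortlist for human review.
    protected_locations = {
        obj["location"] for obj in no_strike_registry
        if obj["category"] in PROTECTED_CATEGORIES
    }
    return [c for c in candidates if c["location"] not in protected_locations]

registry = [{"category": "hospital", "location": "grid-17A"}]
print(shortlist([{"name": "depot", "location": "grid-17A"},
                 {"name": "radar", "location": "grid-03C"}], registry))
# -> only the radar survives the filter; a human still decides on it.
```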
The integration of AI into armed conflicts reveals an ongoing tension between operational efficiency and legal requirements. AI can speed up decision-making, improve compliance and reduce certain risks, but it cannot anticipate all non-compliant cases or assume responsibility for the consequences.
Human oversight remains essential for legal, operational and ethical reasons: it ensures compliance with IHL and, ultimately, carries responsibility for the choices made.
The challenge is therefore not to replace humans with machines, but to ensure that AI remains a tool at the service of the law, rather than a means of evading legal obligations.