Artificial intelligence (AI) is emerging as a strategic pivot in the transformation of modern armies, particularly in command functions. Faced with increasingly complex operational environments and an explosion in information flows, AI makes it possible to process massive amounts of data in real time, identify weak signals and propose appropriate options for action.
In command and control (C2) systems, it does more than simply automate: it redefines roles and responsibilities, turning the military leader into an enlightened supervisor, capable of interacting with intelligent systems while retaining control of judgement. This hybridisation of human and artificial intelligence paves the way for more agile, more responsive, but also more demanding command.
In an interview with IHEDN, Lieutenant General Bruno Baratz, Commander of Future Combat, looks at the transformations brought about by artificial intelligence in command systems, and analyses the operational, ethical and strategic issues involved in integrating this technology into the conduct of operations.
General Baratz has spent his career at the heart of land operations, from his early days with the 1er régiment parachutiste d'infanterie de marine (1er RPIMa) in Bayonne to commanding French forces in the Sahel. An expert in jungle combat and a graduate of HEC in international risk management, he has alternated between field missions, in Bosnia, Afghanistan and French Guiana, and strategic responsibilities in Paris, notably in the office of the Chief of Staff of the French Army. As head of Operation Barkhane in the Sahel in 2022, he steered its transformation into a partnership-based operation. Since August 2023, he has headed the Future Combat Command, a new body dedicated to developing the army's capacity for innovation, which reports directly to the Army Chief of Staff.
HOW DO YOU SEE THE INTEGRATION OF AI INTO C2 SYSTEMS? IS IT SIMPLY A TECHNOLOGICAL EVOLUTION OR A CHANGE IN THE WAY OPERATIONS ARE CONDUCTED AND THE NATURE OF COMBAT?
AI in C2 systems is first and foremost an imperative. On the battlefield, the speed of operational tempo is a major asset in achieving tactical superiority.
The use of algorithms will support the leader in managing the massive flow of battlefield data and, ultimately, in deciding faster than the adversary. This decision-making aid, which sustains the tempo and speed of manoeuvre and strikes, can take different forms:
- Accelerating certain phases of the Method for Developing a Tactical Operational Decision (MEDOT), such as terrain studies, and of planning;
- Proposing courses of action;
- Proposing an orchestration of resources for greater operational efficiency;
- Sorting and processing data.
This major technological development brings significant gains in three areas:
- Faster analysis, decision-making and targeting;
- A multiplication of effects, since a single operator can control several sensors or several effectors simultaneously;
- Greater resilience, thanks to predictive maintenance and logistics, and to C2 systems that are more optimised and therefore potentially less vulnerable.
However, while AI provides a clear comparative advantage in decision-making, it has not fundamentally changed the nature of combat. In the chaos of war, algorithms make it possible to reduce complexity, lower the cognitive load and offer options, but war remains the art of tactics, surprise and the clash of wills.
TRANSFORMING THE DECISION-MAKING PROCESS
HOW DOES AI CHANGE THE STAGES OF THE OODA LOOP (OBSERVE, ORIENT, DECIDE, ACT) IN THE FIELD? WHAT MAJOR BENEFITS AND RISKS HAVE YOU IDENTIFIED?
By reducing the volume of analysis required, AI lets leaders regain time for conception and reflection. By speeding up processes, the leader is in a position to understand changes in the situation more quickly and therefore to seize more opportunities.
The OODA loop is then greatly accelerated towards dynamic targeting which, by cross-referencing the data provided by all the sensors, makes it possible to strike the enemy's attempts to concentrate forces, and its priority targets, within timeframes we have never achieved before.
There are several types of risk, the main one being absolute confidence in the result produced by the algorithm. This confidence must systematically be tempered by an awareness of possible failures linked to the reliability of the data used and the vulnerability of the networks. Furthermore, blind confidence in a statistical result would weaken leaders' ability to make decisions in degraded mode, a more than plausible scenario in an environment subject to violent confrontation in the electromagnetic spectrum.
HOW IS AI TRANSFORMING THE ROLE OF THE LEADER IN THE CHAIN OF COMMAND? WHAT SKILLS ARE BECOMING PRIORITIES IN THIS NEW ENVIRONMENT?
AI will not fundamentally alter the central role of the leader in the chain of command, as long as decision-making - and the responsibility that goes with it - remains his or her responsibility.
On the other hand, this technology will be central to the planning phase (MEDOT, drafting of orders), the conduct phase (manoeuvre plan, orchestration of sensors/effectors, acceleration of the OODA loop), and the post-operation RETEX (feedback) phase (dedicated database). The leader, like his staff, will need to master the proper use of AI - i.e. be aware of its limits and know how to ask it the right questions - to get the most out of it.
What's more, we will have to be prepared to change our organisations and processes if we are to reap the full benefits of the acceleration it offers. This means concentrating human intelligence on the tasks of understanding and reflection, and letting AI handle the tedious parts of the analysis, with no real cognitive added value.
RELIABILITY AND SPEED - THE CENTRAL DILEMMA
HOW DO YOU ASSESS THE RELIABILITY OF AI SYSTEMS IN AN OPERATIONAL SITUATION? WHAT ARE THE LIMITATIONS IN DIFFERENT OPERATIONAL CONTEXTS?
An AI is an algorithm operating on a larger or smaller mass of data. The centralisation of that data and the algorithm itself are AI's two vulnerabilities, since either can be corrupted ("data poisoning"). This is why physical access during design and maintenance represents a vulnerability. We need to be aware of these limitations and guard against them by regularly checking the quality of the results.
The main limitation in a land environment is energy. On an unstructured battlefield, we must be able to keep all our systems running, and an algorithm that has to process a very large mass of data is particularly energy-intensive. It is therefore imperative to analyse data on the battlefield first, to determine what is genuinely useful, so that it can either be exploited on the spot or sent up the chain for processing. The frugality of AI is a major challenge in land combat, where the need for discretion in the combat zone means logistics cannot be overloaded, still less by running thermal generators.
CAN THE SPEED OFFERED BY AI AFFECT THE QUALITY OF MILITARY JUDGEMENT?
On the contrary, I think that AI, by taking on ancillary tasks that represent a significant cognitive load, will enable soldiers to take a step back from events. They will be able to benefit from rapid and reliable data analysis or alert systems that will enable them to remain effective during periods of reduced concentration. On the other hand, they must be careful to maintain a critical mind so as not to fall into the trap of intellectual laziness, which would lead to the automatic validation of the result produced by the algorithm.
HOW CAN WE PRESERVE HUMAN ANALYSIS AND THE FINAL CHOICE IN COMPLEX CONTEXTS?
While the contribution of AI is undeniable, human analysis will remain indispensable, particularly in the planning phase of an operation, which is generally less time-pressured than the conduct phase. Some data is difficult to model (the state of mind of the opposing military leader, the adversary's military culture, the state of the adversary's society, the country's external support, etc.). Certain processes call for social-science analysis, which is harder to model precisely, and for a creativity that is difficult to capture in equations.
At the tactical level, the use of AI will become more widespread, and it will be imperative to train regularly with these new tools to learn, gradually, to master them. Training, through simulations and war games, will certainly help by confronting officers with non-standard situations that reveal the limits of these new tools.
THE CRUCIAL QUESTION OF RESPONSIBILITY
IN THE EVENT OF A SERIOUS ERROR LINKED TO AN AI SYSTEM RECOMMENDATION, HOW IS THE CHAIN OF RESPONSIBILITY IMPLEMENTED IN THE ARMY? TO WHAT EXTENT DOES AI COMPLICATE THE DISTRIBUTION OF RESPONSIBILITIES BETWEEN THE DIFFERENT LEVELS OF COMMAND?
The military leader will always remain responsible for his actions, with or without the use of AI. It is for this reason that the use of AI must be framed upstream, so that questions no longer need to be asked in the heat of the moment. Furthermore, AI must not complicate the responsibilities between the different levels of command, insofar as at each of these levels there is a leader who will retain full responsibility for his or her decision. Although humans will be less involved in the loop so as not to hamper the effectiveness of certain weapons (self-defence systems, swarms of drones or robotic units, etc.), they will always supervise this loop and remain responsible for the missions assigned to these systems.
For its proper use, the question of trusted AI arises. It is only when AI is fully "accepted" (i.e. has proved its operational effectiveness) by its military users that it will produce all its effects within a structured command system.
HOW CAN THE AUTONOMY OF LOWER LEVELS BE PRESERVED IN THE FACE OF THE RISK OF RECENTRALISATION BROUGHT ABOUT BY AI?
I do not believe that AI will be an aggravating factor in the centralisation of command and the reduction of autonomy. Chaos and lethality will remain strong levers for taking responsibility at all levels. On the contrary, the need to speed up decision-making (strikes beyond direct sight or in depth, neutralisation of an enemy attack, etc.) will require every level to be able to seize opportunities, especially if connectivity is not fully guaranteed.
STRATEGIC PERSPECTIVES AND SOVEREIGNTY
CAN AI BECOME A KEY FACTOR IN THE STRATEGIC BALANCE BETWEEN MILITARY POWERS? CAN IT COMPENSATE FOR CAPABILITY WEAKNESSES? IF SO, HOW SHOULD SKILLS AND CAPABILITIES BE ADJUSTED TO SUPPORT THIS TECHNOLOGICAL SUPERIORITY?
AI, coupled with connectivity and robotisation, will undoubtedly be a key factor in the strategic balance between military powers. The combination of these three technologies will give a major advantage to the military power that masters them and uses them with confidence, alongside mechanised manoeuvre, to win on the battlefield.
AI will enable us to optimise the resources at our disposal by getting the best out of them. However, it will not work miracles to overcome our capability shortcomings. If we don't have the right sensors or the right effectors in sufficient numbers, we will be outclassed by our adversary's mass.
What's more, if AI is the tool of the kill web (in French: le réseau de destruction), and therefore of rapid and effective fire manoeuvre, it is also a tool for restoring options for manoeuvre, i.e. combining mobility and fires to create tactical opportunities.
As well as industrialising fires, AI is a tool that gives us back the time to become agile again and to combine fires in depth with the manoeuvre of opportunity.
ARE FRENCH AND EUROPEAN MILITARY AI INITIATIVES UP TO THE CHALLENGE OF MORE TECHNOLOGICALLY ADVANCED POWERS?
I don't think we have anything to be ashamed of when it comes to AI, because France has innovative companies and skilled engineers in the field.
In addition, the creation of the Ministerial Agency for Artificial Intelligence in Defence (AMIAD) on 1 May 2024, together with the investment in significant computing capacity (ASGARD), embodies France's ambition to become a strategic player in this field in Europe. AMIAD, which links research, technological innovation and operational requirements, was set up to organise the scaling-up of AI, giving France the resources to achieve its ambitions in defence AI and thus keep pace with the technological advances of other powers.
THE ETHICAL IMPERATIVE
WHAT DO YOU SEE AS THE MAIN CHALLENGE AND OPPORTUNITY THAT AI REPRESENTS FOR MILITARY COMMAND AND RULES OF ENGAGEMENT IN THE YEARS AHEAD?
The big challenge is data structuring and protection, because data is the fuel of AI. If data is compartmentalised, it cannot be used to best advantage. The second challenge lies in the energy consumption required by algorithms to process mass data.
Storage and energy are the two major challenges that require land forces to organise their data and algorithms frugally.
Ethical issues are natively taken into account and constitute an imperative. There will always be human intervention to assess compliance with the rules of engagement, and war, an inherently human activity, will always be a confrontation of wills before being a technical issue.
While the great promise of AI is automation and the refocusing of human resources on tasks with higher added value, it is also, for a Western army whose demographics are at half-mast and which will have to engage on an increasingly lethal battlefield, a source of robotic mass with which to saturate opposing systems.