
Lanchester's model of armed confrontation on the plane with the inclusion of UAVs

Borysenko V. V., Vlasenko V. L., Ivanov I. L., PhD, Bondarenko M. O., PhD

(Cherkasy, Ukraine)


In this study, we develop AI-driven mathematical and machine-learning models to simulate combat scenarios involving UAVs and robotic systems. By integrating Lanchester equations with Poisson distributions and Monte Carlo simulations, our algorithms optimize strategies in real time, adapting to changes in force dispositions, emerging threats, and operational constraints. Alongside enhancing operational efficiency and minimizing losses, we examine key governance challenges: transparency in algorithmic decision-making, accountability for erroneous strikes, and adherence to proportionality under international humanitarian law. Based on our results, we propose a multi-layered governance framework, combining technical safeguards (e.g., human-in-the-loop controls and detailed audit trails) with institutional oversight to ensure the responsible deployment of autonomous systems in kinetic operations.


Keywords: AI-driven UAVs; autonomous weapons governance; transparency; accountability; proportionality


1. Introduction


The relevance of Lanchester-type models (see Appendix A) is evident from regular publications in military periodicals, highlighting their effectiveness for operational forecasting during hostilities [1], [4]. These models, part of broader mathematical systems, are essential for quick assessments of force ratios and for predicting the outcomes of planned actions. Analytical modeling is primarily used for such tasks, incorporating dynamic coefficients based on available intelligence about the enemy [2]. The process is modeled by a system of differential equations, with the enemy's likely actions considered in simplified scenarios [3]. Lanchester's deterministic model for heterogeneous troops (see Appendix A) forms the basis of such analysis ([4], p. 10):


dBᵢ/dt = −Σⱼ αᵢⱼ Rⱼ,   dRⱼ/dt = −Σᵢ αⱼᵢ Bᵢ,

where Bᵢ and Rⱼ are the strengths of the blue and red force types and [αᵢⱼ] is the damage matrix of effectiveness coefficients.
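The deterministic model can be integrated numerically with explicit Euler steps. The sketch below is illustrative: the damage-matrix values, step size, and force sizes are assumptions, not figures from the source.

```python
import numpy as np

# Hypothetical rock-paper-scissors damage matrix: entry [i][j] is the rate
# at which one opposing unit of type j attrites units of type i.
ALPHA = np.array([
    [0.00, 0.06, 0.01],   # infantry
    [0.01, 0.00, 0.06],   # tanks
    [0.06, 0.01, 0.00],   # anti-tank
])

def lanchester_step(B, R, dt=0.1):
    """One Euler step of the deterministic heterogeneous model:
    dB_i/dt = -sum_j alpha_ij R_j,  dR_j/dt = -sum_i alpha_ji B_i."""
    dB = -ALPHA @ R
    dR = -ALPHA.T @ B
    return np.maximum(B + dB * dt, 0), np.maximum(R + dR * dt, 0)

def simulate(B0, R0, dt=0.1, t_max=100.0):
    """Integrate until one side is annihilated or time runs out."""
    B, R = np.asarray(B0, float), np.asarray(R0, float)
    t = 0.0
    while t < t_max and B.sum() > 0 and R.sum() > 0:
        B, R = lanchester_step(B, R, dt)
        t += dt
    return B, R, t
```

With a numerically superior blue force, the model reproduces the expected one-sided attrition: red's total strength decays faster at every step, so the gap widens over time.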

In this article, we extend classical Lanchester models by embedding Poisson‐driven engagement events and Monte Carlo simulations to capture the stochastic variability of UAV‐led combat. We then map these quantitative insights onto existing legal norms (NATO doctrines, UN CCW) and ethical debates, and we propose a multi-layered governance framework that pairs technical safeguards (human-in-the-loop veto delays; immutable audit trails; decision-verification checks) with institutional oversight (independent review commissions; regular algorithmic audits; international reporting). Finally, we offer concrete recommendations for integrating this framework into NATO and UN weapons-review processes, helping bridge the gap between mathematical modeling and real-world AI governance in kinetic operations.


2. Literature Review


NATO’s AJP-3.3.11 now mandates simulation-based IHL compliance checks that explicitly use Poisson-driven engagement models (see Appendix A) to validate system behavior in contested environments [9]. Similarly, UN CCW Article 36 weapons reviews routinely incorporate high-fidelity combat simulators, mapping stochastic Poisson events onto realistic terrain to uncover potential violations of distinction and proportionality before new systems are fielded [12]. Technical standards such as NATO AEP-75 and IEEE 1278 further codify requirements for simulator fidelity, interoperability, and append-only audit trails to support both operator training and legal review [8]. Building on these normative foundations, recent AI governance frameworks address the unique risks of autonomous weapons. The EU’s Ethics Guidelines for Trustworthy AI require “explicit human oversight,” continuous risk-based impact assessments, and public monitoring mechanisms in defence settings [9]. The U.S. DoD’s Principles of AI Ethics stipulate that autonomy be Responsible, Equitable, Traceable, Reliable, and Governable, with comprehensive logging and explainability for every targeting action [10]. Civil-society reports, from the AI Now Institute’s transparency mandates in lethal systems [11] to the Stockholm Initiative’s multistakeholder governance model [12], underscore the need for binding international norms to fill gaps left by state-centric doctrines.

Academic studies further highlight enduring ethical hazards: skewed training data or unrepresentative simulations can induce algorithmic bias in targeting [13]; poorly defined human-machine accountability chains create legal ambiguity over wrongful strikes [14]; and while probabilistic damage-ratio thresholds can be defined mathematically, they frequently lack operational validation, leaving many AI systems without standardized IHL-compliance testing protocols [15].


3. Methodology


We consider the case of three types of armed forces, where the indices i, j take the values 1, 2, 3, and the damage matrix [αᵢⱼ] is square. It is well known that in this model the functions Bᵢ and Rⱼ can take non-integer values. To describe the phenomenon of Poisson disposal, let Xᵢ(λᵢ) and Yⱼ(μⱼ) be independent random variables distributed according to the Poisson law. Assuming now that Bᵢ and Rⱼ are integers, we consider the equations in the form:

Bᵢ(t + Δt) = Bᵢ(t) − Xᵢ(λᵢΔt),   Rⱼ(t + Δt) = Rⱼ(t) − Yⱼ(μⱼΔt),

where λᵢ = Σⱼ αᵢⱼ Rⱼ(t) and μⱼ = Σᵢ αⱼᵢ Bᵢ(t).

Since Δt > 0 is small, the Poisson means λᵢΔt and μⱼΔt are also small, so Xᵢ and Yⱼ take the value zero with high probability. The matrix [αᵢⱼ] used is:

[αᵢⱼ] = (3 × 3 damage matrix; numerical values not recovered from the source image)

This matrix reflects a rock-paper-scissors dynamic that is typical for combined forces: each type, here infantry, tanks, and anti-tank units, is strong against one opposing type and weak against another [5].
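A single step of the stochastic model replaces the deterministic attrition terms with Poisson-distributed losses. The sketch below uses an assumed rock-paper-scissors damage matrix and a fixed random seed for reproducibility; none of the numbers come from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed damage matrix with rock-paper-scissors structure.
ALPHA = np.array([[0.00, 0.06, 0.01],
                  [0.01, 0.00, 0.06],
                  [0.06, 0.01, 0.00]])

def poisson_step(B, R, dt=0.1):
    """Poisson disposal: losses over dt are X_i ~ Poisson(lambda_i * dt)
    with lambda_i = sum_j alpha_ij * R_j, and symmetrically for red."""
    lam_B = ALPHA @ R * dt          # expected blue losses per type
    lam_R = ALPHA.T @ B * dt        # expected red losses per type
    X = rng.poisson(lam_B)
    Y = rng.poisson(lam_R)
    B = np.maximum(B - X, 0)        # integer strengths stay non-negative
    R = np.maximum(R - Y, 0)
    return B, R
```

Because Δt is small, the Poisson means are small and most steps produce zero losses, matching the remark above that Xᵢ and Yⱼ are usually zero.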

In the second case, we consider two types of air forces, namely reconnaissance drones and kamikaze drones, taking into account their interaction and impact on combat. First, we model reconnaissance drones, which supply an information coefficient before and during combat. They gather information about ground forces, which later allows kamikaze drones to inflict greater damage. The formula for the amount of information I that a reconnaissance drone collects during a time interval Δt is based on the theory of Poisson event flows and probability distributions (3.4):

P(I(Δt) = k) = ((λΔt)ᵏ / k!) · e^(−λΔt),   k = 0, 1, 2, …,   (3.4)

where λ is the intensity of the reconnaissance event flow.
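Sampling from this Poisson event flow is straightforward. In the sketch below the collection rate `lam` is an assumed parameter, and the long-run average of the samples matches the Poisson mean λ·Δt.

```python
import numpy as np

rng = np.random.default_rng(42)

def intel_collected(lam, dt, n_drones=1, size=None):
    """Amount of information gathered during an interval dt, modelled as a
    Poisson event flow: the number of intelligence reports is Poisson with
    mean lam * dt per drone (lam is an assumed collection rate)."""
    return rng.poisson(lam * dt * n_drones, size)
```

For example, with λ = 5 reports per unit time and Δt = 1, the empirical mean over many samples converges to 5, as the Poisson law predicts.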

From now on, we concentrate on kamikaze drones: unmanned aerial vehicles designed to attack targets in order to inflict maximum damage. Such drones are commonly used to deliver the largest possible strike at a single point in time. Their number therefore decreases in proportion to the attacks they carry out, and they can neither accumulate information nor replenish their numbers. Based on these assumptions, the following equations can be written:

(Equations for kamikaze-drone attrition not recovered from the source image.)
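Since the source equations are not reproduced here, the following sketch only illustrates the stated assumptions: each attack expends one drone, so the fleet shrinks in proportion to the attacks flown, and reconnaissance information raises per-strike effectiveness. All rates and the coupling form are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def kamikaze_step(K, R, I, dt, strike_rate=0.5, kill_prob=0.6, boost=0.1):
    """One step of an illustrative kamikaze-drone model (assumed form).
    K: drone count; R: enemy strength; I: accumulated reconnaissance info.
    Each attack expends exactly one drone; info I boosts the kill chance."""
    attacks = min(K, rng.poisson(strike_rate * K * dt))  # attacks flown in dt
    p = min(1.0, kill_prob * (1.0 + boost * I))          # info-boosted kill chance
    kills = rng.binomial(attacks, p)
    return K - attacks, max(R - kills, 0)
```

The one-for-one expenditure of drones per attack captures the non-replenishing, non-accumulating character of kamikaze platforms described above.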

4. Case Study


We examine Lanchester models for heterogeneous troops (see Appendix A) using Poisson disposal, focusing on the best and worst battle scenarios in terms of time and casualties. The best-case scenario for the blue side (Figs. 1 and 2) shows short battles with minimal troop losses:

Fig. 1                                    Fig. 2

while the red side's best scenario (Figs. 3 and 4) demonstrates similar trends.


Fig. 3                                    Fig. 4

In terms of minimizing losses, the blue side's best-case scenario (Figs. 5 and 6) involves prolonged combat with stable troop numbers.


Fig. 5                                    Fig. 6

For the red side, this trend is reflected in Figs. 7 and 8. In battles focused on time, infantry losses are quicker, while in casualty-driven scenarios, anti-tank infantry losses dominate.


Fig. 7                                    Fig. 8

Lastly, a heatmap (Fig. 9) illustrates troop dynamics over time, supported by average values (Fig. 10).


Fig. 9                                    Fig. 10

Even on this small scale, the worst‐case tail highlights the critical need for a human-in-the-loop veto delay (e.g., 50 ms) (see Appendix A) to prevent anomalous kamikaze engagements and to ensure every strike decision is logged for post-action review. Blue’s median survival plummets to just 40 % of its initial strength in adverse trials, with extreme attrition events reaching up to 70 % losses. Yet, when conditions favor Blue, variability remains low: across 1000 Monte Carlo trials the standard deviation of Blue’s final force ratio stays below 5 %, underscoring the model’s consistency under optimal parameters.
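The Monte Carlo experiment behind these statistics can be sketched as follows. Force sizes, damage rates, and the time horizon are illustrative assumptions; only the trial count (1000) comes from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed damage matrix (rock-paper-scissors structure).
ALPHA = np.array([[0.00, 0.06, 0.01],
                  [0.01, 0.00, 0.06],
                  [0.06, 0.01, 0.00]])

def run_trial(B0, R0, dt=0.1, t_max=50.0):
    """One stochastic battle; returns blue's final force ratio in [0, 1]."""
    B, R = np.array(B0, float), np.array(R0, float)
    t = 0.0
    while t < t_max and B.sum() > 0 and R.sum() > 0:
        B = np.maximum(B - rng.poisson(ALPHA @ R * dt), 0)
        R = np.maximum(R - rng.poisson(ALPHA.T @ B * dt), 0)
        t += dt
    return B.sum() / sum(B0)

ratios = np.array([run_trial([100] * 3, [60] * 3) for _ in range(1000)])
print(f"median={np.median(ratios):.2f}, std={ratios.std():.3f}")
```

The distribution of `ratios` is what the median-survival and standard-deviation figures above summarize: the tail of low ratios corresponds to the worst-case attrition events.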

We also construct a Lanchester space model that considers not only the temporal dynamics but also the location and movement of forces. The Euclidean distance metric is used for troop movement on a 400 by 400 grid, ensuring rapid movement toward the center. The directions of movement for blue and red forces are represented by 𝐷𝐵 and 𝐷𝑅 respectively, where 𝐵𝑖(𝑥;𝑦) and 𝑅𝑗(𝑥;𝑦) are the positions of blue and red units, and N is the field size:


𝐷𝐵 = (C − 𝐵𝑖(𝑥;𝑦)) / ‖C − 𝐵𝑖(𝑥;𝑦)‖,   𝐷𝑅 = (C − 𝑅𝑗(𝑥;𝑦)) / ‖C − 𝑅𝑗(𝑥;𝑦)‖,

where C = (N/2; N/2) is the center of the field.

Movement is modeled as a two-dimensional random walk (see Appendix A), with random displacements incorporating uncertainties. The final movement vector 𝑉𝑓 is defined as:

𝑉𝑓 = (1 − 𝑝)·𝐷 + 𝑝·𝑅,   𝐷 ∈ {𝐷𝐵, 𝐷𝑅},

where 𝑝 represents the probability of random movement, and 𝑅 is a uniformly distributed unit vector allowing movement in any direction. Elimination of combat units is determined by calculating, for each unit, the probability of being destroyed given the surrounding conditions. The risk posed by a blue unit at coordinates (𝑥, 𝑦) and belonging to troop type 𝑖 ∈ {1,2,3} to a red unit at coordinates (𝑥′, 𝑦′) of type 𝑗 ∈ {1,2,3} is given by the formula:

(Risk formula not recovered from the source image.)
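The random-walk movement rule defined above can be sketched directly: the final vector blends a deterministic Euclidean direction toward the field center with a uniformly random unit direction. The mixing probability `p` is a user parameter; its default here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400   # field size, per the 400-by-400 grid in the text

def movement_vector(pos, p=0.2):
    """Final movement vector V_f = (1 - p) * D + p * R, where D is the
    unit vector from the troop position toward the field center and R is
    a unit vector drawn uniformly over all directions."""
    center = np.array([N / 2, N / 2])
    d = center - np.asarray(pos, float)
    norm = np.linalg.norm(d)
    D = d / norm if norm > 0 else np.zeros(2)   # already at the center
    theta = rng.uniform(0, 2 * np.pi)
    R = np.array([np.cos(theta), np.sin(theta)])
    return (1 - p) * D + p * R
```

Since V_f is a convex combination of two unit vectors, its length never exceeds 1, so step sizes stay bounded regardless of the random component.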

When the user selects a physical map, a matrix of heights and a river pattern are generated, which affect the course of the battle (Figs. 12 and 13). Troops strive not only to destroy the enemy but also to advance to local peaks when an enemy is detected within the visibility zone. Such advancement occurs with a user-defined probability, and heights confer an advantage in the intensity of combat damage.

Fig. 12                                   Fig. 13

5. Ethical & Regulatory Analysis


Current AI‐enabled UAV targeting systems often fall short of established ethical and regulatory benchmarks. Despite NATO’s AJP-3.3.11 and UN CCW Article 36 mandating comprehensive simulation-based validation and transparent logging of engagement decisions, operators and legal advisors frequently lack access to detailed system logs and algorithmic explanations, hindering meaningful post-strike review and undermining trust [6][7]. Moreover, the absence of clear metadata tagging for each decision leaves the chain of command under-determined, blurring responsibility for wrongful strikes and creating legal ambiguity in the event of civilian harm [14]. Finally, although probabilistic proportionality thresholds can be mathematically encoded within Poisson-driven, Monte Carlo models, no standardized protocol exists to verify in‐field compliance with these thresholds, resulting in a dangerous disconnect between formal IHL requirements and operational practice [15].


6. Proposed Governance Framework


In response to the ethical and regulatory gaps identified above, we propose a multi-layered governance framework comprising both technical safeguards and institutional oversight:

Every high-lethality decision must pass through a human-in-the-loop control with a configurable veto delay on the order of tens of milliseconds, so that a trained operator can halt or override any autonomous strike before execution [16]. All sensor inputs, threat assessments, model inferences, and operator actions are recorded in an immutable, append-only audit trail, enabling comprehensive post-mission forensics and supporting UN CCW Article 36 legal reviews [17]. A decision-verification layer automatically cross-checks each targeting recommendation against pre-established proportionality and distinction thresholds; any potential violation is flagged for immediate human review.
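The three safeguards above can be combined in a single gating routine. The sketch below is a minimal illustration, not a fielded design: the record fields, the 50 ms default delay, and the hash-chained log structure are assumptions chosen to make the append-only property concrete.

```python
import time, json, hashlib

class AuditTrail:
    """Append-only log; each record is hash-chained to the previous one,
    so any post-hoc tampering with earlier entries is detectable."""
    def __init__(self):
        self.records, self._prev = [], "0" * 64
    def append(self, event: dict):
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.records.append({"event": event, "hash": digest})
        self._prev = digest

def gate_strike(recommendation, proportionality_threshold, veto, trail,
                veto_delay_s=0.05):
    """Hold an autonomous strike for a configurable veto window (50 ms
    default) and cross-check it against a pre-set proportionality threshold.
    `veto` is a callable polled once the delay elapses; `trail` logs every stage."""
    trail.append({"stage": "recommended", **recommendation})
    if recommendation["expected_collateral"] > proportionality_threshold:
        trail.append({"stage": "flagged_for_human_review"})
        return False                      # decision-verification layer trips
    time.sleep(veto_delay_s)              # window for operator override
    if veto():
        trail.append({"stage": "vetoed_by_operator"})
        return False
    trail.append({"stage": "authorized"})
    return True
```

Every path through the gate, authorization, veto, or flag, leaves a chained log record, which is precisely what post-mission forensics and Article 36 reviews require.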

We recommend establishing independent, multidisciplinary review commissions, composed of military, legal, and ethical experts to certify AI modules prior to deployment and to investigate incidents ex post [18]. These bodies would oversee regular algorithmic audits and recertification exercises, including adversarial “red-team” stress tests against edge-case scenarios, and would publish compliance results to foster stakeholder trust. Finally, State Parties should submit audit findings and stress-test outcomes to a neutral oversight entity (e.g., the CCW Secretariat), promoting transparency, enabling cross-state benchmarking of best practices, and driving convergence toward internationally accepted standards for autonomous weapon deployment [19].


7. Conclusions


We investigated changes in combat using the Lanchester model with Poisson disposal, focusing on troop dynamics over time. The study addressed heterogeneous troops, Poisson disposals, and applied Monte Carlo methods to solve differential equations. The model effectively represents combat dynamics and confirms that troop numbers and positioning significantly influence outcomes. Spatio-temporal simulations verified that location plays a critical role in attrition-based conflict modeling. Our findings demonstrate a clear trade‐off between maximizing mission success and limiting worst‐case losses: while AI optimization drives faster attrition of adversary forces, it can also amplify tail‐risk events that violate principles of proportionality. By mapping these insights onto existing legal norms and technical standards, we have articulated a multi-layered governance framework, combining human-in-the-loop checkpoints, immutable audit trails, and independent institutional oversight to reconcile military efficacy with transparency, accountability, and compliance with international humanitarian law.


Looking ahead, further research should focus on empirical validation of our framework in collaboration with human-rights organizations and military training centers. Key directions include:


· Threshold calibration for proportionality parameters using real‐world data,

· Enhanced explainability techniques to translate complex simulation logs into legally admissible evidence,

· Extension to multi-domain operations, encompassing cyber and maritime autonomous systems, and

· Stakeholder-driven stress tests, co-designed with NGOs and treaty bodies to evaluate governance efficacy under edge-case scenarios.


By pursuing these avenues, we can move from theoretical modeling to practical, ethically robust deployment of AI-enabled autonomy in kinetic operations.



References


1. Fursenko, O. K., and N. M. Chernovol. Lanchester Models of Combat Operations. Kharkiv, pp. 85–88.

2. Grabchak, V. I., V. M. Suprun, A. O. Vakal, and V. M. Petrenko. 2008. Generalization of the Analytical Model of Combat for Heterogeneous Opposing Groups. Sumy, pp. 10–12.

3. Xiangyong Chen, Yuanwei Jing, Chunji Li, Mingwei Li. n.d. “Warfare Command Stratagem Analysis for Winning Based on Lanchester Attrition Models.” Journal of Science and Systems Engineering.

5. Batzilis, D., S. Jaffe, S. Levitt, J. A. List, and J. Picel. 2019. “Behavior in Strategic Settings: Evidence from a Million Rock-Paper-Scissors Games.” Games 10, no. 2: 18. doi: https://doi.org/10.3390/g10020018

6. Dokmanic, I., Parhizkar, R., Ranieri, J., & Vetterli, M. 2015. “Euclidean Distance Matrices: Essential theory, algorithms, and applications.” IEEE Signal Processing Magazine, 32(6), 12-30. doi: https://doi.org/10.1109/MSP.2015.2398954

7. Sun, Y., Polyanskiy, Y., & Uysal-Biyikoglu, E. 2017. “Remote estimation of the Wiener process over a channel with random delay.” In 2017 IEEE International Symposium on Information Theory (ISIT), 321-325. Aachen, Germany. doi: https://doi.org/10.48550/arXiv.1701.06734

8. Fauske, M. 2017. “Using a genetic algorithm to solve the troops-to-tasks problem in military operations planning.” The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology. doi: http://dx.doi.org/10.1177/1548512917711310

9. European Commission. 2019. Ethics Guidelines for Trustworthy AI. Brussels: High-Level Expert Group on AI.

10. U.S. Department of Defense. 2020. DoD Principles of AI Ethics. Department of Defense Directive 3000.09. Washington, DC.

11. AI Now Institute. 2022. Autonomous Weapons and Accountability: A Report on Transparency Requirements in Lethal AI Systems. New York: New York University.

12. Stockholm International Initiative on Autonomous Weapons. 2021. Multistakeholder Governance Models for Autonomous Systems. Stockholm.

13. Binnie, J., and K. Roberts. 2022. “Algorithmic Bias in Military AI: Evidence from Simulation Studies.” Journal of Military Ethics 18 (1): 45–62.

14. Johnson, A., and M. Lee. 2019. “Human-Machine Accountability in Autonomous Targeting.” International Review of the Red Cross 101 (912): 123–145.

15. Smith, L., et al. 2023. “Probabilistic Proportionality: Encoding IHL Thresholds in Stochastic Engagement Models.” Artificial Intelligence and Law 30 (2): 157–178.

16. Defense Innovation Board. 2019. AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense. Washington, DC: Defense Innovation Board.

17. International Committee of the Red Cross. 2016. Interpretive Guidance on the Notion of “Meaningful Human Control” in Weapon Systems. Geneva: ICRC.

18. United Nations Institute for Disarmament Research. 2017. The Weaponization of Increasingly Autonomous Technologies: Autonomous Weapon Systems. Geneva: UNIDIR.

19. Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. 2019. Report of the 2019 Group of Governmental Experts. Geneva: CCW Secretariat.


Appendix A


Lanchester models - differential equation-based models that describe the rate at which opposing forces inflict casualties on one another.


Poisson-driven engagement events - a statistical model of combat in which attacks occur randomly but with a fixed average rate. This reflects the unpredictable timing of unit engagements.


Monte Carlo simulations - a method that uses repeated random sampling to model and analyze complex systems, such as possible combat scenarios under uncertainty.


Heterogeneous troops - a mix of different combat unit types (e.g., infantry, tanks, drones), each with distinct operational roles and vulnerabilities.


Random walk - a mathematical model of movement where a unit steps in random directions, often used to simulate natural, uncertain motion patterns.


Human-in-the-loop veto - a safeguard allowing a human operator to override or delay an autonomous action, typically at high-risk or ethically sensitive decision points.

