“Beyond the Target: Governing AI-Enabled Drones in the Ukraine-Russia War”
Written by Sharon M. Lazich
March 2026
Executive Summary
The kinetic nature of the Ukraine–Russia war has accelerated the evolution of AI-enabled drones by driving fast-paced, bottom-up production that shortens innovation cycles (Smith, 2024) and lowers barriers to entry and cost (Bendett & Kirichenko, 2025; Nieczypor & Matuszak, 2025). While Ukraine has leveraged these technologies to resist Russia’s full-scale invasion, the conflict has also catalyzed the development of semi-autonomous and autonomous systems (Russell, 2023), particularly automatic target recognition (ATR). As these technologies–and the data used to train them–expand, so too do the risks of civilian harm, and governance systems must expand in parallel. Addressing these risks will require a combination of private and public sector engagement (Amodei, 2026) and the adoption of a risk management framework (RMF) that can define technical functionality, clarify accountability thresholds, and manage identified risks throughout the drone lifecycle. The enduring challenge for Ukraine, a veritable canary in the coal mine, is balancing rigorous risk management with the need to maintain tactical advantage against a larger, better-equipped adversary unconstrained by social or international norms governing the use of weapons in war (Abdurasulov, 2025).
Tool Analysis
Ukraine has successfully used ATR-enabled drone technology both for force protection and to execute attacks on Russian military targets (Bondar, 2025). At its most basic, ATR is a suite of technologies that uses sensory data to detect, classify, and track enemy targets from images gathered by various onboard sensors (Li et al., 2014; Verly et al., 1989). Inputs can include geographical maps, high- or low-resolution imagery, infrared signatures, radar data, GPS data, target locations, and target types; the output is a list of potential targets (e.g., a vehicle or a tank, a person or a group of people, or a structure [Verly et al., 1989]), each with a confidence metric indicating the likelihood of positive identification. A human operator or an autonomous system can then validate the classification and decide the appropriate action (Li et al., 2014).
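The detect–classify–confidence flow described above can be expressed as a minimal sketch. All class names, labels, and the review threshold below are illustrative assumptions for exposition, not drawn from any fielded system:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single candidate target produced by an ATR pipeline."""
    label: str         # e.g., "vehicle", "person", "structure"
    confidence: float  # likelihood of positive identification, 0.0-1.0

# Illustrative threshold (an assumption, not doctrine): detections below
# it are routed to a human operator rather than treated as confirmed.
REVIEW_THRESHOLD = 0.90

def triage(detections):
    """Split candidate targets into confirmed vs. human-review queues."""
    confirmed = [d for d in detections if d.confidence >= REVIEW_THRESHOLD]
    review = [d for d in detections if d.confidence < REVIEW_THRESHOLD]
    return confirmed, review

# Example: a high-confidence vehicle detection passes, while an
# ambiguous "person" detection is held for human validation.
candidates = [Detection("vehicle", 0.97), Detection("person", 0.62)]
confirmed, review = triage(candidates)
```

The key governance point the sketch makes concrete is that the confidence metric is precisely where a human-oversight protocol attaches: the threshold determines which classifications a human must validate before any action is taken.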
ATR has concrete business applications across many sectors beyond active conflict, such as surveillance, security, and medicine (Li et al., 2014; United Nations Secretary-General, 2024, p. 5). It is not without limitations, however, including misclassification of a target object; accountability failures resulting from a lack of human oversight; and data quality concerns brought on by data poisoning (Rossiter, 2021), inclement weather, poor camera positioning, sensor limitations, or changes in terrain (Li et al., 2014; Verly et al., 1989). Because these limitations carry the potential for civilian harm, deployment should be accompanied by a structured risk assessment framework to identify, monitor, and mitigate associated risks (International Committee of the Red Cross [ICRC], 2019).
Investment Reasoning
The kinetic nature of the Ukraine–Russia war increases the need for a private-public RMF to address the legal, ethical, and reputational considerations of using ATR technology during an active conflict. From a legal perspective, an RMF can help align ATR tool deployment with existing international humanitarian law, including the Geneva Conventions (1949) and the Article 36 weapons review obligation under Additional Protocol I (1977), which requires that any new weapon be assessed prior to use to ensure compliance with international humanitarian law. From an ethical perspective, an RMF supports responsible ATR-enabled drone deployment by identifying potentially harmful capabilities before they are operationalized, creating time to correct the model. From a reputational perspective, Ukraine, bound by the customary norms of United Nations membership and potentially future membership in the European Union, should make every effort to abide by international laws that protect civilians, even and especially when its opponent does not (Abdurasulov, 2025; Office of the United Nations High Commissioner for Human Rights, 2025). Not only will an RMF help safeguard civilian lives, but it will also provide self-correcting mechanisms to improve drone performance and reliability. Additionally, private sector actors that demonstrate a willingness to help states manage the legal, ethical, and reputational risks associated with these technologies may benefit from enhanced credibility, stronger partnerships, and sustained market opportunities (Madanchian & Taherdoost, 2025).
Proposed AI Lifecycle
All stages of the ATR-drone lifecycle necessitate the use of an RMF; however, the task with the greatest impact on protecting civilians from harm is T40 (AI system harms and impacts pre-assessment), performed during the “Planning and Design” stage. This assessment applies when an AI system is determined to make “irreversible decisions that affect the rights and obligations of individuals…and may generate significant harms” (University of Turku, 2025). Of all the potential dangers posed by semi-autonomous or autonomous ATR-enabled drones, the risk to civilian life represents the most significant threat to Ukraine’s legal, ethical, and reputational credibility and should therefore be assessed and documented at the earliest possible stage of development. This assessment can and should be performed by the AI system owner, reviewed by a third-party evaluator, and tested by the Government of Ukraine prior to deployment, with monitoring during and after deployment to ensure system reliability.
AI Impact Assessment
The proposed RMF will be a hybrid of the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the Government of Canada’s Algorithmic Impact Assessment (AIA) tool, and will include the following sections:
Description of the drone’s technical capabilities
Explanation of the decision-making algorithms that enable the drone to detect and classify targets, along with a brief description of the training dataset
Identification of accountability and decision-making structures, including human oversight protocols
Assessment of the degree to which the ATR drone could cause civilian harm
Evaluation of the system’s lifecycle and potential for misuse if captured
The proposed RMF would draw on the “AI Risks and Trustworthiness” section of NIST’s framework to ensure ATR systems adhere to principles of reliability, safety, security, accountability, and transparency (NIST, 2023). The output would draw from Canada’s AIA tool, producing a scaled risk score, ranging from Little to No Impact to Very High Impact (Government of Canada, 2026), that flags the need for further investigation by the Government of Ukraine before a system is put into use. While an RMF faces challenges, such as the limited availability of reliable metrics (NIST, 2023), it provides a structured approach to analyzing and, ideally, mitigating the risks associated with ATR-enabled drone use. While the onus to complete the RMF will be on the AI developer, risk tolerance and decisions to operationalize ATR-enabled drones sit with the Government of Ukraine.
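The scaled risk-score output described above can be sketched as a simple tier mapping. The numeric cut-offs and the review rule below are assumptions for demonstration only; they are not the official AIA thresholds:

```python
# Illustrative mapping of an aggregate questionnaire score (normalized
# to 0-1) onto the scaled impact tiers described in the text. The
# cut-off values are hypothetical, chosen only to show the mechanism.
TIERS = [
    (0.25, "Little to No Impact"),
    (0.50, "Moderate Impact"),
    (0.75, "High Impact"),
    (1.00, "Very High Impact"),
]

def impact_tier(score: float) -> str:
    """Map a normalized risk score to its impact tier."""
    for upper_bound, tier in TIERS:
        if score <= upper_bound:
            return tier
    return "Very High Impact"  # scores above 1.0 cap at the top tier

def needs_review(score: float) -> bool:
    """Assumed rule: the two highest tiers trigger further government
    investigation before the system may be put into use."""
    return impact_tier(score) in {"High Impact", "Very High Impact"}
```

The design point is that the developer supplies the score, but the tier boundaries and the review rule, i.e., the risk tolerance, are set by the end user, here the Government of Ukraine, matching the division of responsibility described above.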
Primary actors involved in executing the RMF include (this list is not exhaustive):
Government of Ukraine/Military Commanders (End User): Sets the standards for RMF protocols and is the ultimate decision-maker with respect to putting an ATR drone into operation.
AI System Owner/AI Developer: Responsible for conducting the RMF and filing the report for their system.
Third-party Monitor: Provides external review and validation of the RMF.
External Legal and Industry Standards
Any government deploying ATR-enabled drones should adhere to international legal standards, particularly when civilian lives are at risk. As a country seeking EU membership, Ukraine should ensure its use of ATR-enabled drones is guided by international humanitarian law, including the Geneva Conventions (1949), Articles 35 and 36 of Additional Protocol I (1977), and the NATO Policy for the Protection of Civilians (2016); Article 36 in particular requires states to assess new weapons and methods of warfare to ensure they do not cause unlawful harm to civilians. Together, these instruments provide an important legal foundation for assessing the risks of ATR-enabled drone deployment during conflict and for mitigating harm to civilian populations.
Stakeholder Engagement Strategy
Due to the pressures of the war, the urgency to develop and operationalize force protection tools and technologies compresses the time available to perform a rigorous RMF. Nonetheless, the Government of Ukraine must engage stakeholders to assess whether deployed drone technologies adequately safeguard the aforementioned legal, ethical, and reputational considerations (Madanchian & Taherdoost, 2025). Beyond AI system owners, developers, and third-party monitors, the Government of Ukraine should invite civil society and human rights organizations, as well as academics, military practitioners, technologists, and advocacy groups, to participate in the RMF tool’s development (Rossiter, 2021).
Conclusion
Ukraine faces the seemingly impossible task of balancing operational effectiveness with compliance with international humanitarian law and EU and NATO standards. Still, RMFs benefit the Government of Ukraine and private sector drone manufacturers by providing a framework for self-correction at the development stage, which can improve model performance and reduce potentially harmful operational errors (Madanchian & Taherdoost, 2025; NIST, 2023), particularly where civilian lives are concerned. RMFs also reinforce legal, ethical, and reputational standards, as well as adherence to international humanitarian law during an active conflict (ICRC, 2019; NATO, 2016).
References
Abdurasulov, A. (2025). The new AI arms race changing the war in Ukraine. BBC News. https://www.bbc.com/news/articles/cly7jrez2jno
Amodei, D. (2026). The adolescence of technology. https://www.darioamodei.com/essay/the-adolescence-of-technology
Bendett, S., & Kirichenko, D. (2025). Battlefield drones and the accelerating autonomous arms race in Ukraine. Modern War Institute at West Point. https://mwi.westpoint.edu/battlefield-drones-and-the-accelerating-autonomous-arms-race-in-ukraine/
Bondar, K. (2025). Ukraine’s future vision and current capabilities for waging AI-enabled autonomous warfare. Center for Strategic and International Studies. https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare
European Parliament & Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
Government of Canada. (2026). Algorithmic impact assessment tool. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-ai/algorithmic-impact-assessment
International Committee of the Red Cross. (1949). Geneva Convention relative to the protection of civilian persons in time of war (Fourth Geneva Convention). https://ihl-databases.icrc.org/en/ihl-treaties/gciv-1949
International Committee of the Red Cross. (2019). Decisions, decisions, decisions: Computation and artificial intelligence in military decision-making. ICRC. https://www.icrc.org/en/publication/4293-decisions-decisions-decisions-computation-and-artificial-intelligence-military
Li, Y., Li, X., Wang, H., Chen, Y., Zhuang, Z., Cheng, Y., Deng, B., Wang, L., Zeng, Y., & Gao, L. (2014). A compact methodology to understand, evaluate, and predict the performance of automatic target recognition. Sensors, 14(7), 11308–11350. https://doi.org/10.3390/s140711308
Madanchian, M., & Taherdoost, H. (2025). Ethical theories, governance models, and strategic frameworks for responsible AI adoption and organizational success. Frontiers in Artificial Intelligence. https://doi.org/10.3389/frai.2025.1619029
Nieczypor, K., & Matuszak, S. (2025). Game of drones: The production and use of Ukrainian battlefield unmanned aerial vehicles. OSW Centre for Eastern Studies. https://www.osw.waw.pl/en/publikacje/osw-commentary/2025-10-14/game-drones-production-and-use-ukrainian-battlefield-unmanned#_ftn1
National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0). NIST. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
Office of the United Nations High Commissioner for Human Rights. (2025). Russian authorities committed crimes against humanity targeting civilian population through drone attacks, UN Commission of Inquiry finds. https://www.ohchr.org/en/press-releases/2025/10/russian-authorities-committed-crimes-against-humanity-targeting-civilian
Russell, S. (2023). AI weapons: Russia’s war in Ukraine shows why the world must enact a ban. Nature, 614(7949), 620–623. https://doi.org/10.1038/d41586-023-00511-5
Rossiter, A. (2021). AI-enabled remote warfare: Sustaining the Western warfare paradigm? International Politics, 58(4), 641–659. https://doi.org/10.1057/s41311-020-00271-5
United Nations Secretary-General. (2024). Lethal autonomous weapons systems (UN Doc. A/79/88). United Nations. https://docs.un.org/en/A/79/88
University of Turku. (2025). The AI governance lifecycle. AI Governance Framework. https://ai-governance.eu/ai-governance-framework/the-ai-governance-lifecycle/
Verly, J. G., Delanoy, R. L., & Dudgeon, D. E. (1989). Machine intelligence technology for automatic target recognition. The Lincoln Laboratory Journal, 2(2), 277–310. https://archive.ll.mit.edu/publications/journal/pdf/vol02_no2/2.2.8.machineintelligence.pdf
Written as part of graduate coursework in AI Management & Policy, Purdue University.