In a recent study published in the journal npj Digital Medicine, researchers reviewed current guidelines for the ethical application of artificial intelligence (AI) in military and healthcare settings. Their discussion focuses on generative AI, a new class of technology that aims to efficiently produce new content, and on overcoming the limitations of current guidance on the ethical use of the technology. They develop and propose a new framework for the ethical application of generative AI in military and clinical settings, called the "GREAT PLEA".
Artificial intelligence and the need for ethics
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. With the increasing computing power of modern hardware and the widespread adoption of smart devices, access to and adoption of AI are higher than ever. AI has infiltrated and revolutionized nearly every aspect of modern human society, with machine learning tools widely used in online advertising, scientific research, and military simulations.
The military sector is benefiting particularly from AI tools, with the ongoing conflict between Russia and Ukraine expected to pave the way for autonomous AI-guided weapons systems. Unfortunately, the development of AI tools is rarely accompanied by an ethical evaluation of the system, with the risk of collateral damage to non-combatants or friendly forces often going unexamined.
“Seeing the rapid emergence of artificial intelligence and its applications in the military, the United States Department of Defense (DOD) unveiled the ethical principles for artificial intelligence in 2020.”
These principles cover five critical aspects of AI-centered ethics: responsibility, equitability, traceability, reliability, and governability. The North Atlantic Treaty Organization (NATO) adopted these principles and expanded them to include lawfulness, explainability, and bias mitigation. These ethical frameworks highlight the attention prominent military agencies are paying to AI tools and the steps they are taking to mitigate AI's unwanted effects.
AI tools in the medical field carry advantages and cautions similar to those in the military. Applications of AI in medicine typically involve using these technologies to help clinicians diagnose diseases and recommend treatments. However, in some cases, AI has begun to replace entire departments of formerly human staff.
The figure illustrates the commonalities and differences in ethical principles between the military and healthcare. In the researchers' estimation, traceability, reliability, lawfulness, accountability, governability, and equity are ethical principles common to both sectors. At the same time, principles such as empathy and privacy are emphasized in healthcare, while principles such as national security and defense are emphasized in the military.
Generative AI (GenAI) is a new type of AI designed to produce new content. It learns patterns from existing data and uses that knowledge to generate new results. Despite proving extremely useful in drug discovery, evidence-based drug summarization, and equipment design, these models are designed to optimize desired "black and white" outcomes without concern for potential "grey" collateral damage. This calls for ethical constraints on the effects of AI, which, unfortunately, remain under debate. Furthermore, the malicious use of AI-developed technologies is rarely examined, much less discussed.
Recently, the World Health Organization (WHO) released a paper on ethical considerations in the medical adoption of AI. However, this guidance remains in its infancy, with much work, both research and debate, needed before AI can be trusted sufficiently to take over formerly human roles.
"We propose the 'GREAT PLEA' ethical principles for generative AI in healthcare, namely Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy. The GREAT PLEA ethical principles represent our plea to the community to prioritize these principles when implementing and using generative AI in practical healthcare settings."
The current scenario
Since ethical debates in medical AI remain contentious and rarely addressed, most research on bias reduction and algorithm optimization has focused on the military realm. Studies and reports, mainly by the RAND Corporation, have aimed to reconcile legal obligations (such as the Geneva Conventions and the Law of Armed Conflict [LOAC]) with public opinion when developing guidelines for the ethical development and use of AI algorithms, particularly generative AI.
Ongoing military research suggests that while AI can be used for most military applications, AI is not currently advanced enough to accurately distinguish between combatants and civilians, requiring a Human-In-The-Loop (HITL) prior to the activation of lethal weapon systems. At the same time, the aforementioned principles articulated by NATO and the US Department of Defense have enabled the development of state-of-the-art AI algorithms with ethical considerations built into their code (e.g., recommended actions).
“The success of these ethical principles has also been demonstrated through the careful adoption and integration of artificial intelligence, taking into account its potential dangers, which the Pentagon is determined to avoid.”
Public opinion in the US appears to be mixed, with some favoring autonomous military decision-making by AI systems, while others are uncomfortable with the idea of AI initiating warfare without human permission.
“These results could be due to a perceived lack of accountability, which is seen as something that could completely negate the value of AI, as a fully autonomous system making its own decisions removes military operators or clinicians from responsibility for the actions of the system”.
The GREAT PLEA for generative artificial intelligence in clinical settings
In this paper, the authors develop and propose a new framework for the ethical use of generative AI in medical settings. Dubbed the "GREAT PLEA," the framework draws on ethical documents published by the US Department of Defense, the American Medical Association (AMA), the WHO, and the Coalition for Health AI (CHAI). NATO's principles were largely excluded because of their reliance on adherence to international military law, which does not apply to the clinical setting. In all other respects, NATO's recommendations are almost identical to those set forth by the US Department of Defense.
GREAT PLEA is an acronym for Governability, Reliability, Equity, Accountability, and Traceability, principles common to the military and medical fields, together with Privacy, Lawfulness, Empathy, and Autonomy, concerns highlighted across the healthcare spectrum that constitute a set of considerations hitherto underexplored in the medical literature.
Since generative AI requires guidelines that account for misinformation, potential data bias, and generalized evaluation metrics, these nine principles form the basis for future AI systems to exist and function with minimal human intervention.
These principles can be enforced through cooperation with legislators, the establishment of standards for developers and users, and collaboration with recognized bodies in healthcare, such as the WHO or the AMA.
Journal Reference:
- Oniani, D., Hilsman, J., Peng, Y., Poropatich, R. K., Pamplin, J. C., Legault, G. L., & Wang, Y. (2023). Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare. npj Digital Medicine, 6(1), 1-10. DOI: https://doi.org/10.1038/s41746-023-00965-x, https://www.nature.com/articles/s41746-023-00965-x