Open letter: Civil society organisations urge the Dutch government to immediately establish a moratorium on developing AI systems in the military domain

The Responsible AI in the Military Domain (REAIM 2023) conference is taking place on 15 and 16 February 2023 in The Hague and aims to 1) put the topic of responsible AI in the military domain higher on the political agenda, 2) mobilise and activate a broad group of stakeholders to contribute to concrete next steps, and 3) foster and increase knowledge by sharing experiences, best practices and solutions. However, we fear that this conference will further normalise the militarisation of AI, which, with its increasing autonomy, will continue to harm historically marginalised communities disproportionately.

It is important to start by recognising the acute challenges that Artificial Intelligence (AI) poses to the enjoyment of human rights through its broad social and economic impacts. While some of those risks are shared by wider groups within society, there is abundant evidence that the discriminatory harm AI tools can cause falls uniquely on people from historically marginalised communities. After all, AI is built by humans and deployed in systems and institutions marked by entrenched discrimination and racism, including but not limited to the criminal legal system, the housing market and job recruitment, all of which rely on data that contain built-in biases, biases that are frequently baked into the very outcomes the AI is asked to predict.

Specifically, within a military context, it is crucial to recognise that AI is frequently promoted in terms of its benefits whilst neglecting how its disparate harms fit into wider systems of violence, exclusion, over-surveillance, criminalisation, structural discrimination and racism. Accepting AI-enabled applications as legitimate means within the military domain would lead to an exponential increase in the violence enacted on populations, putting their right to life and dignity at risk and making access to justice for victims even more difficult. The same rhetoric can be found in technology policy, which has shifted to a more defensive and protectionist approach following the securitisation of (national) security policies after 9/11. The consequences are felt disproportionately by the marginalised, with common examples being the branding of Muslims as terrorists, migrant workers as criminals, and refugees as fortune seekers.

It has always been clear that war triggers certain political calculations and assumptions about which lives are worth preserving and which are not, which lives are worth mourning and which are not, and which lives are worth living and which are not. Delegating critical decisions in warfare and security to AI-enabled military systems that contain algorithmic biases and foster digital dehumanisation, and to institutions that disproportionately mistreat individuals from marginalised communities, can only result in additional harm to these groups. Problematic new technologies are also often tested and used on marginalised communities first. We should challenge structures of inequality, not embed them into military applications that kill and classify based on pre-programmed labels and identities, that cannot comprehend the value of human life, and that cannot make complex ethical choices. Even setting aside high-risk applications such as “target” recognition and information processing, low-risk applications such as threat monitoring spark outrage and raise many questions. What does it take to be classified as a threat? And how will military commanders be able to independently judge necessity and proportionality when the system flags something as a risk?

How can we ensure that certain bodies, which continue to suffer from centuries of institutionalised racism, colonialism and violence, are protected from the perpetuation of over-policing if there are no robust systems for human rights impact management? And most importantly, how do we trust our government if the scope of “responsible” AI in the military domain is grounded in a techno-solutionist ideology in which only the “concrete technology” is considered, and no attention is paid to the broader context in which these systems are developed, a context marked by violence, unequal power relations and institutional racism and ethnic profiling? When analysing AI-enabled military systems, it is therefore crucial to look at the broad context in which these applications are developed and would be used: to approach them not only as material technologies but with an understanding of the wider power imbalances and violence that surround them. Yet the Dutch government demonstrates a clear lack of reflection, as past mistakes regarding the impact of AI systems on human lives have not led it to centre human rights in this debate.

These technologies are not developed in a neutral context but in societies in which, for example, (predictive) policing is tied to specific communities (ethnic and racial minorities) and specific types of crime (white-collar vs blue-collar), and in which these systems lack public scrutiny, accountability and interpretability. Other examples from border management have shown that using AI systems to save lives is no more than false hope: maintaining situational awareness across European, Mediterranean and North African shores amounts in practice to actively preventing “irregular immigration”, leaving asylum seekers helpless. Even when “ethical” perspectives on military AI are discussed, the debates generally do not include individuals from marginalised groups and rarely include persons from the global South, which exacerbates existing structural inequities. To date, and even during this international conference, discussions regarding the “ethics” of military AI have continued to highlight the voices of defence, diplomacy, the private sector and academia. These voices mostly do not represent the views of persons within marginalised groups, and the discussions are not centred on human rights and existing international humanitarian law.

No debate on the “ethics” of these systems should be considered serious if the voices of those who risk being most affected are not heard and if the risks remain under-studied. We therefore strongly disapprove of the government's approach of not actively involving those voices, and the groups and organisations that advocate for them, during REAIM 2023. Furthermore, it is not “responsible” AI that should be given a higher place on the agenda but the fostering of human rights; the Dutch government should not ask “how to implement military AI responsibly” but rather “how to prevent the use of military AI”. The rhetoric around the need to invest in this domain because of AI's transformative power is aimed solely at outpacing adversaries in a geopolitical AI race, and it does not account for the risks of these alarming technologies, which are set to violate fundamental human rights. We therefore urge the Dutch government to actively commit to advocating for the establishment of international laws that regulate these developments and create clear boundaries for governments and companies between what is ethically acceptable and what is not.

A first step forward should be to address the significant gaps in the regulation of AI for military purposes in the AI Act. The current proposed EU Council position on the AI Act does not apply to AI systems developed or used exclusively for military purposes (Article 2) and exempts AI systems developed for national security purposes from oversight and controls. This leaves it to militaries, states, tech companies and manufacturers to govern these applications themselves, implying that the EU overlooks the inherent dangers because innovation prevails over the protection of human rights, an ideology that values rapid technological change over the dignity of human lives. This open letter, jointly endorsed by the undersigned civil society and human rights groups, therefore urges the Dutch government to reshape the scope of the debate on responsible military AI in three ways:

  1. Firstly, by committing to a moratorium on developing AI systems for military purposes until the government has established robust human rights impact management systems and implemented clear policies;
  2. Secondly, by directing advocacy efforts towards the establishment of international laws that regulate these developments, clarifying the exact scope and implications of “military purposes” in Article 2 of the proposed EU Council position on the EU AI Act, and eliminating the reference to “national security purposes” from the exemptions;
  3. Lastly, by putting human rights at the centre of these developments by actively consulting civil society groups, human rights groups, experts and marginalised peoples that actively engage with and/or are affected by these developments.

Drafted by:
Oumaima Hajri, Racism and Technology Center, and European Center for Not-For-Profit Law (ECNL)

Signed by:
Oumaima Hajri
Racism and Technology Center
European Center for Not-for-Profit Law (ECNL)
Bits of Freedom
Waag Futurelab
Controle Alt Delete
Privacy First
European Network Against Racism (ENAR)
Article36
Vredesorganisatie PAX