Key dates:
• Jan. 30, 2026: Workshop contributions submission deadline
• Mar. 1, 2026: Decision notification and OpenReview publication
• Apr. 26 or 27, 2026: Workshop date
AI has deep historical roots in research for military applications, and much of the scientific community remains closely connected to the surveillance and defense industries. Yet even today, the military uses of AI are often obscured: many researchers and developers remain unaware of how their work might be deployed in conflicts, or of the extent to which they might contribute to intentional harm, including potential violations of international law.
Although AI in conflict and surveillance is a key topic of public and policy debate, the main machine learning conferences have offered no formal space for AI researchers themselves to articulate and discuss their positions on the weaponization of their research. The AI for Peace workshop @ ICLR 2026 will be a forum to consider both the harms associated with research dissemination and design decisions, and the opportunities to affirmatively build our research and development agenda from a starting position of non-violent harm prevention, research ethics, and respect for international law, including international humanitarian and human rights law.
We aim to address the critically under-discussed issue of AI’s dual-use nature, focusing on how machine learning technologies are being adapted for military purposes, potentially without the researchers’ knowledge or consent. While attending to the heightened risks associated with particular areas and systems of research, we will also think through collectively what it looks like to engage productively in research and development that places ethics and international law at its core. Our objectives are to:
• Increase transparency about the pipelines through which AI research enters military and surveillance applications.
• Develop collective strategies to address ethical and legal risks as a community of researchers.
• Highlight and support research efforts that contribute to peace-building applications, including those helping to surface or elucidate harmful applications of AI.
A key avenue of exploration will be to draw parallels between current conversations in AI and longer-running debates in other scientific fields, such as genetics and nuclear physics, where researchers have grappled with comparable ethical challenges and proposed concrete professional responses.
Call for Papers and details:
https://lnkd.in/gExAjKUR