Seven Essential Questions for Ethical War Crimes Documentation


June 1, 2022

Considerations for those collecting, investigating, and analysing open source information in Ukraine and elsewhere

By Jennifer Easterday, Jacqueline Geis, and Alexa Koenig
This piece was inspired and informed by discussions on the Responsibledata.io listserv and written with support from Julia Keseru, Alicja Peszkowska and Helen Kilbey from The Engine Room team.

As the war in Ukraine enters its fourth month, the urgent need to document international crimes continues. The scope and scale of atrocities and war crimes perpetrated by Putin and his regime have drawn widespread attention and condemnation, as the gruesome realities of the conflict have been broadcast globally minute by minute, including through online platforms and mobile devices. It is no surprise, then, that digital evidence of atrocities abounds – evidence crucial for accountability and transitional justice efforts.

In response, many different actors have undertaken efforts to collect and process that evidence. With mobile phones, social media, machine learning models, and satellite imagery, we now have a variety of tools and methods at our disposal for documenting atrocity crimes. Many of these tools were developed and/or used by activists in Syria, Yemen, Myanmar, and Sudan. In Ukraine, individuals on the ground are using their phones to capture photos and videos of potential evidence; open source investigators are searching for online evidence to verify rumours or identify perpetrators; and everyday people are posting to social media to raise awareness and mobilise.

There has also been a rapid proliferation of documentation and accountability efforts. Iryna Venediktova, Ukraine’s prosecutor general, has already opened more than 8,000 criminal investigations related to the war and has identified over 500 suspects, including Russian politicians, military commanders, and propagandists. The International Criminal Court, the United Nations, the European Union, the Organization for Security and Co-operation in Europe (OSCE), individual countries, private investigation teams and independent human rights groups have all started to gather evidence and open investigations – with many being conducted in parallel by lay investigators.

But the prevalence of digital tools and of efforts to document and seek justice for these crimes may ultimately cause additional harm and create unanticipated obstacles to criminal prosecutions. As Karim Khan, Prosecutor of the International Criminal Court, recently pointed out, the “over-documentation” of war crimes and rights abuses must be avoided; he recalled Rohingya rape victims being interviewed 10 to 15 times “in a way that is unacceptable and obscene.”

With the scale and scope of evidence collection and generation happening in Ukraine, the dangers and harms of documentation are significant and require attention and action. Those working on evidence collection and accountability in Ukraine and elsewhere need to ensure that their efforts are ethical, respectful, responsible, and, most importantly, meaningful for affected victims. 

Being responsible when it comes to data isn’t only about complying with a set of predefined rules. It’s about balancing the opportunities against the risks: only collecting what we need (data minimisation) or working with partners who may be better placed to collect the data that’s needed; being clear about the data we keep (and why) and the data we never collect; and understanding who might be highlighted in the data we have, and who is missing from it. Centering the source in evidence collection and open source investigations – whether the people represented in the dataset or in front of the camera – can help guide those decisions.
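To make data minimisation concrete, here is a minimal sketch of what an intake record might look like if every retained field must carry a documented reason for keeping it. This is illustrative only: the class and field names are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical intake record illustrating data minimisation: every retained
# field is paired with an explicit reason for keeping it, and anything
# without a documented purpose is simply never collected.
@dataclass
class EvidenceRecord:
    item_id: str                # internal reference, not the source's identity
    collected_at: datetime      # when the item was archived, for later auditing
    retention_reason: str       # why this item is being kept at all
    consent_status: str         # e.g. "informed", "implied", "unobtainable"
    source_contact: Optional[str] = None  # stored only if strictly needed

record = EvidenceRecord(
    item_id="UKR-2022-000123",
    collected_at=datetime.now(timezone.utc),
    retention_reason="Corroborates the location of a strike reported on 2022-04-03",
    consent_status="implied",  # public post; see the consent discussion below
)
```

The particular fields matter less than the discipline they enforce: if we cannot write down a retention reason for a piece of data, that is a strong signal it should not be collected at all.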

To ensure the international community and those working on the ground are doing more good than harm, we have detailed a few questions that can help guide journalists, human rights defenders, investigators, and others working on evidence and accountability efforts.

What is your “why”? Collecting for collection’s sake may not be the best course of action. Most publicly available information (and analysis of that information) is rarely viewed. Do we have a responsibility to share and publicise some of this information more widely if it is in the public interest – even if doing so might harm individuals or groups? Journalists and whistleblowers also grapple with the risks posed to individuals when deciding whether to publish source materials, which can put innocent people in danger if they contain sensitive information. It is important to be clear on the trade-offs of doing evidence collection and accountability work. How is our work adding value? Are we duplicating other efforts? Where is our niche in the larger investigative effort? In their Responsible OSI Workbook, Gabriela Ivens and Sam Dubberley provide an essential tool for examining our reasons, motivations, and goals for collecting and investigating open source information.

What is your operating environment? Have you identified and mitigated possible security risks? Collecting, holding, transmitting, and analysing sensitive data comes with multiple security risks. It is critical to remember that all sides in any conflict can access open source information. The need to protect civilians from such risks explains why Google disabled the Maps features in Ukraine that showed live traffic data and how busy places like shops and restaurants were. Here, an old adage is sound advice: an ounce of prevention is worth a pound of cure. Once we have determined our “why”, the next set of questions should centre on an assessment of risk: What are the potential unintentional harms if we do this work? How do we think about digital, physical, organisational, and psycho-social security? What is our threat model? Risk assessments should include input from all of our partners, be specific to the situation at hand, and be updated as the situation evolves. In addition to a risk assessment, a crisis management plan or protocol may also be useful, providing action plans for when data breaches or other worst-case scenarios occur.

Critically, security protocols are only as strong as their implementation, so right-sizing them to the threats faced is key. And because protocols are implemented by people, the well-being of teams and partners underpins all security efforts. Tactical Tech’s Holistic Security guides are an excellent place to start thinking through multi-faceted security planning.
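One way to keep a risk assessment specific and regularly updated is a simple, structured risk register. The sketch below is an illustration under assumed conventions – the 1-to-5 likelihood and impact scales and the field names are our own, not a standard – showing how each identified threat can be paired with a mitigation and a review date so the assessment evolves with the situation.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative risk-register entry: one row per identified threat, pairing
# likelihood/impact estimates with a mitigation and a review date, so the
# assessment stays specific and is revisited as the situation evolves.
@dataclass
class RiskEntry:
    threat: str        # digital, physical, organisational, or psycho-social
    likelihood: int    # 1 (rare) to 5 (almost certain); assumed scale
    impact: int        # 1 (minor) to 5 (severe); assumed scale
    mitigation: str    # what we do to reduce likelihood or impact
    review_by: date    # when this entry must next be reassessed

    @property
    def score(self) -> int:
        # A simple likelihood x impact product for rough prioritisation.
        return self.likelihood * self.impact

register = [
    RiskEntry("Data breach exposing witness identities", 3, 5,
              "Encrypt data at rest; restrict access to named staff", date(2022, 7, 1)),
    RiskEntry("Interviewer burnout and vicarious trauma", 4, 4,
              "Rotate caseloads; schedule psycho-social support", date(2022, 6, 15)),
]

# Review the highest-scoring risks first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:>2}  {entry.threat} (review by {entry.review_by})")
```

Whether a register like this lives in code, a spreadsheet, or on paper matters far less than that it is shared with partners and actually revisited.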

Do you have informed consent? What is your plan for when that is impossible? With open source data, especially social media posts that are widely shared, it’s not always clear whether the person posting the information or a person mentioned or shown in the post has given informed consent about how that post might be used in accountability procedures. How can we ensure that victims are consenting to their information – whether a photo, social media post, or other personal data – being used for long-term human rights work and criminal accountability? Is consent implied through publicly sharing information on social media? Is informed consent required from victims or bystanders before a post is re-used for human rights or accountability reasons? If so, how would that informed consent be obtained during a crisis? What happens if our human rights work puts those people at risk of further danger? 

Sometimes it will not be possible to get informed consent. Does that mean we should not use that piece of information? How do we balance the rights of the person whose consent is needed with the overarching human rights risks of not sharing it? How do we manage withdrawal of consent? The recently released Outlining a Human-Rights Based Approach to Digital Open Source Investigations provides guidance for open source investigators on consent and related rights, and the Murad Code provides guidance on gaining informed consent from survivors of sexual and gender-based violence.  

How are you managing expectations, given that not all of this evidence will lead to prosecution or conviction? How do we explain to victims and others interested in justice that even though we are collecting this information, they may never see the justice they personally seek? Many factors go into deciding whether and how to bring cases for international crimes or start a human rights campaign. Sometimes that means prosecutors will focus on a specific group of crimes but not others, a certain group or level of military commander, or a particular geographical location. Unfortunately, these difficult decisions mean that evidence – often generated at great personal risk and painstakingly collected or obtained – may never see the inside of a courtroom or be used in any human rights work. Moreover, those who are indicted may remain out of reach of international justice – in the case of Russia’s war in Ukraine, for example, perpetrators could achieve this by simply staying in Russia. This makes it critical to continuously manage the expectations of those who are sharing evidence with professional investigators, prosecutors, or human rights groups.

Maximising the potential of digital evidence requires managing expectations about evidentiary processes as well. Digital evidence needs to be verifiable and authenticatable, and, if used in court, usually requires that someone can testify before a judge about the context in which it was collected. However, the relevant people may be impossible to find. Some evidence may be thrown out. And many people may be unwilling to go to court – witnesses may fear repercussions for testifying or wish to avoid the risk of re-traumatisation. How can this be explained and managed in the midst of a crisis situation?

Are you taking steps to avoid victim fatigue and re-traumatisation? Having so many different actors conducting separate investigations can cause re-traumatisation and interview fatigue, as with the Rohingya rape victims mentioned above. It can also interfere with ongoing legal investigations: when a reporter interviews a survivor, for example, the survivor may be tainted as a witness because a circulating narrative about their experiences can compete with their testimony in court. Interviewing victims is highly specialised work and should only be done by experts. And the experts who conduct such interviews must be aware of who else is conducting parallel investigations, and of how their work potentially complements and/or creates problems for those other efforts. How can over-documentation and interview fatigue among survivors and affected individuals be avoided?

What steps are you taking to understand and avoid bias? Having access to digital investigation tools, the internet, phones, and channels for posting or communicating material reflects certain privileges of access that can bias our investigations. The fact that we are focusing here on Ukraine and not the massacres in Tigray, Ethiopia, is a glaring example of this bias. Machine learning and algorithmic biases can also taint our evidence collection, shaping what we are able to see and access and determining whether material is removed automatically from social media before it can be archived as evidence. Personal and cultural biases likewise affect how we collect, assess, and analyse data. Open source investigators need to understand these potential biases and take steps to minimise them, so that our online investigations serve justice for those who are most affected, not just those who are most privileged.

Are you planning for long-term evidence storage and maintenance? Justice for international crimes can be painstakingly slow, meaning that evidence collected in Ukraine may not be used in court proceedings for years. Similar timelines apply to information destined for longer-term advocacy, policy, or reconciliation work, or for inclusion in the historical record. For this evidence to be as meaningful and useful as possible, it needs to be properly preserved. Proper preservation means thinking through the challenges of storage, developing access and use protocols, and understanding the evolving security considerations both for the full archive and for individual pieces of evidentiary data. It also means developing models for consent, use, and security that take those most affected into consideration and move beyond personal relationships.
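One basic building block of long-term preservation – sketched minimally below, with hypothetical file paths and metadata fields – is to fingerprint each item at intake with a cryptographic hash, so that its integrity can still be demonstrated years later when it finally reaches a courtroom or an archive.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def preserve(item_path: str, archive_dir: str = "archive") -> dict:
    """Record a SHA-256 fingerprint and intake metadata for an evidence file.

    The hash lets anyone verify, years later, that the archived bytes are
    unchanged: a basic building block of long-term evidentiary integrity.
    """
    data = Path(item_path).read_bytes()
    manifest = {
        "file": Path(item_path).name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "archived_at": datetime.now(timezone.utc).isoformat(),
    }
    out_dir = Path(archive_dir)
    out_dir.mkdir(exist_ok=True)
    # Write the manifest alongside the archive so integrity can be re-checked.
    (out_dir / (Path(item_path).name + ".manifest.json")).write_text(
        json.dumps(manifest, indent=2)
    )
    return manifest

def verify(item_path: str, manifest: dict) -> bool:
    """Re-hash the file and compare it against the recorded fingerprint."""
    return hashlib.sha256(Path(item_path).read_bytes()).hexdigest() == manifest["sha256"]
```

A hash alone proves integrity, not provenance: real archives also need access controls, redundant storage, and a documented chain of custody.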

These are questions that the responsible data community is asking, and which we must all be answering, together. 

Resources:

- Responsible OSI Workbook, by Gabriela Ivens and Sam Dubberley
- Holistic Security guides, Tactical Tech
- Outlining a Human-Rights Based Approach to Digital Open Source Investigations
- The Murad Code
- Berkeley Protocol on Digital Open Source Investigations, United Nations Human Rights Office (2020)

Jennifer Easterday, JD, is the Co-Founder and Executive Director of JustPeace Labs, a 501(c)(3) organization that advocates for and supports the responsible use and deployment of digital technology in high-risk settings. Her prior work with NGOs and international tribunals focused on strengthening international responses to armed conflict and mass human rights abuses.

Jacqueline Geis is the Chief Executive Officer of Videre, where she has spent the past decade building it into a highly respected, Skoll Award-winning human rights documentation organization. Prior to joining Videre, she built and led work on justice and human rights at the Brookings Institution, the US State Department, the UN International Criminal Tribunal for the former Yugoslavia, the International Budget Project, and the American Bar Association.

Alexa Koenig, JD, PhD, is executive director of UC Berkeley’s Human Rights Center and a lecturer at UC Berkeley’s Schools of Law and Journalism. She co-founded HRC’s Investigations Lab and was heavily involved in the development of the Berkeley Protocol on Digital Open Source Investigations, which was published with the United Nations Human Rights Office in 2020. 

