Seven Essential Questions for Ethical War Crimes Documentation


Considerations for those collecting, investigating, and analysing open source information in Ukraine and elsewhere

By Jennifer Easterday, Jacqueline Geis, and Alexa Koenig
This piece was inspired and informed by discussions on the Responsibledata.io listserv and written with support from Julia Keseru, Alicja Peszkowska and Helen Kilbey from The Engine Room team.

As the war in Ukraine enters its fourth month, the urgent need to document international crimes continues. The scope and scale of atrocities and war crimes perpetrated by Putin and his regime have drawn widespread attention and condemnation, as the gruesome realities of the conflict have been broadcast globally, minute by minute, via online platforms and mobile devices. It is no surprise, then, that digital evidence of atrocities abounds – evidence crucial for accountability and transitional justice efforts.

Many diverse actors have undertaken efforts to collect and process that evidence in response. With mobile phones, social media, machine learning models, and satellite imagery, we now have a variety of tools and methods at our disposal to document atrocity crimes. Many of these tools were developed and/or used by activists in Syria, Yemen, Myanmar, and Sudan. In Ukraine, individuals on the ground are using their phones to create photos and videos of potential evidence; open source investigators are searching for online evidence to verify rumours or to identify perpetrators; and everyday people are posting to social media to raise awareness and mobilise.

There has also been a rapid proliferation of documentation and accountability efforts. Iryna Venediktova, Ukraine’s prosecutor general, has already opened more than 8,000 criminal investigations related to the war and has identified over 500 suspects, including Russian politicians, military commanders, and propagandists. The International Criminal Court, the United Nations, the European Union, the Organization for Security and Co-operation in Europe (OSCE), individual countries, private investigation teams, and independent human rights groups have all started to assemble evidence and open investigations – with many being conducted in parallel by lay investigators.

But the prevalence of digital tools and efforts to document and seek justice for these crimes may ultimately cause additional harm and create unanticipated obstacles to criminal prosecutions. As Karim Khan, Prosecutor of the International Criminal Court, has recently pointed out, the “over-documentation” of war crimes and rights abuses must be avoided, recalling Rohingya rape victims interviewed 10-15 times “in a way that is unacceptable and obscene.”

With the scale and scope of evidence collection and generation happening in Ukraine, the dangers and harms of documentation are significant and require attention and action. Those working on evidence collection and accountability in Ukraine and elsewhere need to ensure that their efforts are ethical, respectful, responsible, and, most importantly, meaningful for affected victims. 

Being responsible when it comes to data isn’t only about complying with a set of predefined rules. It’s about balancing the opportunities against the risks: only collecting what we need (data minimisation) or working with partners who may be better placed to collect the data that’s needed; being clear about the data you keep (and why) and the data you never collect; understanding who might be highlighted in the data you have, and who is missing from it. Centering the source in evidence collection and open source investigations – whether the people represented in the dataset or in front of the camera – can help guide those decisions. 
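
To make data minimisation concrete, a collection tool can simply refuse to store fields it has no defined need for. The sketch below is a minimal, hypothetical illustration of that idea; the field names are assumptions chosen for the sake of example, not a recommended schema.

```python
# Hypothetical sketch of data minimisation: only fields the team has decided
# it needs are ever stored; everything else (device identifiers, contact
# details, precise coordinates) is dropped before the record is saved.
REQUIRED_FIELDS = {"media_hash", "capture_time", "location_area", "description"}

def minimise_record(raw_record: dict) -> dict:
    """Return a copy of the record containing only the agreed-upon fields."""
    return {key: value for key, value in raw_record.items() if key in REQUIRED_FIELDS}
```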

To ensure the international community and those working on the ground are doing more good than harm, we have detailed a few questions that can help guide journalists, human rights defenders, investigators, and others working on evidence and accountability efforts.

What is your “why”? Collecting for collection’s sake may not be the best course of action. Most public information (and analysis of that information) is rarely viewed. Do we have a responsibility to share and publicise some of this information more widely if it is in the public interest – even if doing so might harm individuals or groups? Journalists and whistleblowers also grapple with the risks posed to individuals when deciding whether to publish source materials, which can put innocent people in danger if they contain sensitive information. It is important to be clear about the trade-offs involved in evidence collection and accountability work. How is our work adding value? Are we duplicating other efforts? Where is our niche in the larger investigative effort? In their Responsible OSI Workbook, Gabriela Ivens and Sam Dubberley provide an essential tool for examining our reasons, motivations, and goals for collecting and investigating open source information.

What is your operating environment? Have you identified and mitigated possible security risks? Collecting, holding, transmitting, and analysing sensitive data comes with multiple security risks. It is critical to remember that all sides in any conflict can access open source information. The need to protect civilians from these risks is why Google disabled Maps features in Ukraine showing live traffic data and how busy places like shops and restaurants are. Here the old adage is sound advice: an ounce of prevention is worth a pound of cure. As we determine our “why,” the next set of questions should centre on an assessment of risk: What are the potential unintentional harms if we do this work? How do we think about digital, physical, organisational, and psycho-social security? What is our threat model? Risk assessments should include input from all of our partners, be specific to the situation at hand, and be updated as the situation evolves. In addition to a risk assessment, a crisis management plan or protocol may also be useful, providing action plans for when data breaches or other worst-case scenarios occur.

Critically, security protocols are only as strong as their implementation, so right-sizing them to the threats faced is key. And because protocols are implemented by people, the well-being of teams and partners underpins all security efforts. Tactical Tech’s Holistic Security guides are an excellent place to start thinking through multi-faceted security planning.

Do you have informed consent? What is your plan for when that is impossible? With open source data, especially social media posts that are widely shared, it’s not always clear whether the person posting the information or a person mentioned or shown in the post has given informed consent about how that post might be used in accountability procedures. How can we ensure that victims are consenting to their information – whether a photo, social media post, or other personal data – being used for long-term human rights work and criminal accountability? Is consent implied through publicly sharing information on social media? Is informed consent required from victims or bystanders before a post is re-used for human rights or accountability reasons? If so, how would that informed consent be obtained during a crisis? What happens if our human rights work puts those people at risk of further danger? 

Sometimes it will not be possible to get informed consent. Does that mean we should not use that piece of information? How do we balance the rights of the person whose consent is needed with the overarching human rights risks of not sharing it? How do we manage withdrawal of consent? The recently released Outlining a Human-Rights Based Approach to Digital Open Source Investigations provides guidance for open source investigators on consent and related rights, and the Murad Code provides guidance on gaining informed consent from survivors of sexual and gender-based violence.  

How are you managing expectations, given that not all of this evidence will lead to prosecution or conviction? How do we explain to victims and others interested in justice that even though we are collecting this information, they may never see the justice they personally seek? Many factors go into deciding whether and how to bring cases for international crimes or start a human rights campaign. Sometimes that means prosecutors will focus on a specific group of crimes but not others, a certain group or level of military commander, or a particular geographical location. Unfortunately, these difficult decisions mean that evidence – often generated at great personal risk and painstakingly collected or obtained – may never see the inside of a courtroom or be used in any human rights work. Moreover, those indicted may remain out of reach of international justice – in the case of Russia’s war in Ukraine, for example, perpetrators could do this by simply staying in Russia. This makes it critical to continuously manage the expectations of those who are sharing evidence with professional investigators, prosecutors, or human rights groups.

Maximising the potential of digital evidence requires managing expectations about evidentiary processes as well. Digital evidence needs to be verifiable and authenticatable, and, if used in court, usually requires that someone can testify before a judge about the context in which it was collected. However, the relevant people may be impossible to find. Some evidence may be thrown out. And many people may be unwilling to go to court  – witnesses may fear repercussions for testifying or wish to avoid the risk of re-traumatisation. How can this be explained and managed in the midst of a crisis situation?
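
One small, practical step toward verifiability is recording a cryptographic hash of each file at the moment it is collected, so that anyone reviewing the material later can check that it has not been altered. The sketch below, which uses only Python's standard library, is an illustration of that step rather than a prescribed evidentiary standard; the file name and metadata fields are hypothetical assumptions.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def register_evidence(path: str, collected_by: str, notes: str = "") -> dict:
    """Hash a file as received and return a small provenance record that can
    be stored alongside the original, unmodified file."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "file": Path(path).name,
        "sha256": digest,
        "registered_at": datetime.now(timezone.utc).isoformat(),
        "collected_by": collected_by,
        "notes": notes,
    }

# Hypothetical usage:
# record = register_evidence("video_0231.mp4", collected_by="field_team_a")
```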

Are you taking steps to avoid victim fatigue and re-traumatisation? Having so many different actors conducting separate investigations can cause re-traumatisation and interview fatigue, as with the Rohingya rape victims mentioned above. It can also interfere with ongoing legal investigations (for example, when a reporter interviews a survivor, who then becomes potentially tainted as a witness by having a narrative about their experiences circulating that may compete with their testimony in court). Interviewing victims is highly specialised work and should only be done by experts. And those experts who conduct such interviews must be aware of who else is conducting parallel investigations and how their work potentially complements and/or creates problems for each other. How can over-documentation and interview fatigue of survivors and affected individuals be avoided?

What steps are you taking to understand and avoid bias? Having access to digital investigation tools, the internet, phones, and channels for posting or sharing material reflects certain privileges of access that can bias our investigations. The fact that we are focusing here on Ukraine and not the massacre in Tigray, Ethiopia, is a glaring example of this bias. Machine learning and algorithmic biases can also taint our evidence collection, what we are able to see and access, and whether material is removed automatically from social media before it can be archived as evidence. Personal and cultural biases also affect how we collect, assess, and analyse data. Open source investigators need to understand these potential biases and take steps to minimise them, so that online investigations serve justice for those who are most affected, not just those who are most privileged.

Are you planning for long term evidence storage and maintenance? Justice for international crimes can be painstakingly slow, meaning that evidence collected in Ukraine may not be used in court proceedings for years. Similar timelines exist for information destined for longer-term advocacy, policy, reconciliation, or historical memory projects or for inclusion in the historical record. For this evidence to be as meaningful and useful as possible, it needs to be properly preserved. Proper preservation means thinking through the challenges of storage, developing access and use protocols, and understanding the evolving security considerations both for the full archive and for the individual pieces of evidentiary data. It also means developing models for consent, use, and security that both take into consideration those most affected and move beyond personal relationships. 
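
Building on the hashing sketch above, a long-term archive could periodically re-verify stored files against the hashes recorded at collection time, flagging corruption or tampering over the years the material may need to be kept. Again, this is a minimal sketch under assumed record and directory structures, not a definitive preservation workflow.

```python
import hashlib
from pathlib import Path

def verify_archive(records: list[dict], archive_dir: str) -> list[str]:
    """Return the names of archived files whose current SHA-256 hash no
    longer matches the hash recorded when they were first collected."""
    mismatches = []
    for record in records:
        path = Path(archive_dir) / record["file"]
        current = hashlib.sha256(path.read_bytes()).hexdigest()
        if current != record["sha256"]:
            mismatches.append(record["file"])
    return mismatches
```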

These are questions that the responsible data community is asking, and which we must all be answering, together. 

Resources:

Jennifer Easterday, JD, is the co-founder and Executive Director of JustPeace Labs, a 501(c)(3) organization that advocates for and supports the responsible use and deployment of digital technology in high-risk settings. Her prior work with NGOs and international tribunals focused on strengthening international responses to armed conflict and mass human rights abuses.

Jacqueline Geis is the Chief Executive Officer of Videre, where she has spent the past decade building it into a highly respected, Skoll Award-winning human rights documentation organization.  Prior to joining Videre, she built and led work on justice and human rights at the Brookings Institution, the US State Department, the UN International Criminal Tribunal for the Former Yugoslavia, the International Budget Project, and the American Bar Association. 

Alexa Koenig, JD, PhD, is executive director of UC Berkeley’s Human Rights Center and a lecturer at UC Berkeley’s Schools of Law and Journalism. She co-founded HRC’s Investigations Lab and was heavily involved in the development of the Berkeley Protocol on Digital Open Source Investigations, which was published with the United Nations Human Rights Office in 2020. 



How the use of biometrics in the humanitarian sector has the potential to put people at risk – an interview with Belkis Wille


Back in March, Human Rights Watch (HRW) published a report indicating that the Taliban control the systems holding sensitive biometric data that Western donor governments left behind in Afghanistan. We spoke to Belkis Wille, a senior researcher at HRW, about the report’s findings and how the use of biometrics in the humanitarian sector has the potential to put people at risk.

Using data can reinforce existing power asymmetries

Data is often celebrated for its potential to improve the efficiency and accountability of humanitarian agencies. But the use of data (especially biometric data) in the context of emergency response may end up reinforcing existing power asymmetries between those in need and those who distribute help. “I have seen this very transactional dynamic being created where one person is giving another person a tent to sleep in, and in return getting their iris scanned,” Belkis Wille told us. Her experiences working in humanitarian response in Yemen, Syria, and Iraq have informed her perspectives on responsible data: from questioning the complexities of meaningful, informed consent in these contexts to noticing the frequent lack of accountability for the agencies and institutions working with the data.

Intrusive data collection and inadequate protections mean real risks to the Afghan people

The latest HRW report examined six digital identity and payroll systems built in Afghanistan for or with assistance from foreign governments and international institutions. The systems contain Afghans’ personal and biometric data, including iris scans, fingerprints, photographs, occupations, home addresses, and names of relatives. Much of this data ended up in the hands of the Taliban, endangering the Afghan people. HRW’s researchers have not been able to confirm that international donors such as the US government, the European Union, the UN, and the World Bank have taken any steps to ensure that people whose data was collected are informed of what has happened to it and what the consequences could be. Additionally, researchers note that the data could be used to target perceived opponents (and research suggests that this may already be happening).

Documenting data-related harm is an important step

Belkis believes that documenting cases of data misuse causing tangible harm is a fundamental step toward improving the sector’s data practices: “I don’t think anyone in the humanitarian space necessarily intends to do harm. If we can show how harm is being done, we can potentially very successfully advocate for better practices.” Together with her team at HRW, Belkis is working on cataloguing and publishing human rights violations resulting from the lack of data protection standards and thorough risk assessments in the case of the Rohingya refugees and in Kenya, Jordan, Iraq, and, most recently, Afghanistan. All of this research and reporting forms part of HRW’s broader advocacy strategy around INGOs and state donors.

Where do we go from here?

The report calls on donors to develop a set of procedures for work in similar contexts, including improved procedures for data collection as well as the potential destruction of sensitive data. It also urges donors to mitigate risks in countries where similar systems have been built, disclose what exactly has (or has not) been done to protect the data, and follow up with the people affected. HRW also recommends that the United Nations High Commissioner for Refugees (UNHCR) and countries considering Afghan refugee claims take into account, when processing asylum claims, the increased vulnerabilities and risks caused by the Taliban’s capture of asylum seekers’ personal data.

If you’d like to learn more about responsible data collection in the context of humanitarian response, explore the links below. If you’d like to connect with Belkis or keep up with her work, you can follow her on Twitter.

Recommended Links:

This interview first appeared in the Mission Responsible newsletter #14. Subscribe to receive regular updates from us and keep an eye out for more interviews like this in the months to come!

The featured image is “Human Shapes” by Dr. Matthias Ripp CC BY 2.0 and was found via Flickr.

