Responsible Open Source Investigations for Human Rights Research

Gabriela Ivens

When so much is possible, drawing a line between 'can' and 'should' becomes essential

When I started working with open source information, I knew it under different names – 'citizen evidence', 'publicly available data' or 'user-generated content'. But regardless of what it was called, I was drawn to the idea that some of the same techniques and data sources used by companies and governments to collect information on individuals could be flipped around to expose abuses of power. In theory, all that was needed was an internet connection, some tools and a creative mindset.

Working on a project called Exposing the Invisible, we showcased different approaches, technologies and projects that either made use of public data or demonstrated ways, both digital and physical, to create data using low-cost and accessible means. I learned about projects that attached cameras to kites to capture higher-resolution imagery of towns where none had existed before, investigations that used flight-spotting and tracking websites to trace planes being used to clandestinely deport people, and techniques for analysing websites in order to establish links between particular companies.

At the same time as researchers and tool developers were increasingly testing the boundaries of what was possible, a community was forming around questions of ethics and responsibility in this type of work. The collaboratively written 2016 DatNav report, for example, offered guidance on navigating some of the important ethical dilemmas of the time, and in 2017 a Responsible Data Forum gathered others interested in these questions.

Nowadays, open source data is used in a range of ways and in different types of investigation. One of the most well-known involves researchers investigating crisis and conflict situations from afar, predominantly through visual data. By analysing videos and photographs shared online alongside satellite imagery, researchers can determine the exact locations, dates and impacts of particular violations.

The benefits of being able to gain this kind of knowledge are clear, but these new and increasingly widespread capabilities also give rise to significant concerns: what is being covered, and what stories and data are being missed; how to gain meaningful consent to publish findings and re-publish content; what to do if consent is not possible (without it, how can you know you aren't retraumatising those captured in a video, or those who captured it?); who is credited in the investigative process; and who does and does not have access to the investigative techniques, data and tools being used.

In a chapter I wrote with Zara Rahman for the book Digital Witness, titled Ethics in Open Source Investigations, we explored questions around who is able to do these kinds of investigations and who gets credit for them, and looked at different elements of harm. Like many others, we don't believe it is always possible to 'do no harm' (a framework often applied to human rights research) in this type of fast-moving work in an ever-changing landscape, especially when the work involves making decisions for others from afar – but we do need to put extra work into reducing the harm that can be caused.

This applies both to those who have captured or appear in evidence, and to those working on investigations. In past work focused on videos and photographs of human rights violations, I sought out and developed individual strategies to reduce and manage the risk of vicarious traumatisation when viewing distressing content, learning from others in this space such as the Eyewitness Media Hub, Sam Dubberley and Amnesty International's Digital Verification Corps, who have built this out in their programmes.

I've learnt that while individual strategies are useful, they can only go so far. For those working in organisations, there needs to be an organisational commitment to, and a system-wide approach toward, enabling people to work sustainably with this kind of material. This is the path we're taking at Human Rights Watch as part of the work of our internal Stress and Resilience Task Force: devising an organisational approach to safeguarding the wellbeing of our staff, and of those we work with, when handling distressing material.

While collaborating with the Syrian-led NGO Mnemonic on their Syrian Archive project, I learned the importance of knowing both the context and the people who have been doing the work of capturing videos and photographs. The Syrian Archive's founder, Hadi Al Khatib, often spoke about how essential it was to do open source work in collaboration with those working in Syria and, when possible, those who recorded the videos the organisation was working with.

Documenting human rights violations generally works best, in my experience, when you combine as many different types of data as possible with knowledge of the place and context in which you are working. When it comes to using and publishing open source data, I find that focusing on the source – whether that be the person represented in the dataset you have, the person behind or in front of the camera, the whistleblower, or the person represented in a data leak – can help to guide decisions. 

Looking forward

In a field that relies on technology and publicly available data, what it means to conduct responsible open source investigations is constantly changing. Undoubtedly, when I look back in six years' time, the types of data available, the ways in which data can be used, and the norms around using that data will all have evolved. With thought, care, and clear standards, this evolution can happen in a rights-respecting way.

Right now, taking a human rights-based approach helps to guide me through these difficult questions, drawing on human rights principles such as proportionality and necessity, alongside other related approaches such as duty of care, harm minimisation, and radical empathy (for more on how this can be applied in open source investigations, see an article I wrote with Sophie Dyer: What would a feminist open source investigation look like?). In working on these questions with Zara Rahman, we used the maxim 'the ends cannot justify the means' as a touchstone to help guide the use of these techniques in human rights research.

Some people currently see open source investigations as somewhat magical. This can lead to putting the practice on a pedestal, or to believing it lies out of reach – that they couldn't do these kinds of investigations themselves. But these techniques should not be seen as magic – they should be reviewed, questioned, and replicated.

Due to restrictions in movement over the past year, much human rights research has of necessity moved online, leading researchers to further develop their skills in working with open sources of data. My hope is that these techniques will become more normalised and less sensationalised, and that they will become part of a researcher's standard toolbox, ready if and when needed.

The reason I started in this area of work is the same reason I continue – the hope that these techniques will be accessible to as wide a group of people as possible, in order to continue exposing human rights abuses.

Further reading