This is a guest post by Mark Latonero, PhD. Mark is a fellow at Data & Society, a nonprofit research institute in New York City that is focused on the social, cultural, and ethical issues arising from data-centric technological development. Mark is also a research professor and research director at the Annenberg Center on Communication Leadership & Policy, University of Southern California.
For some years now, we have seen opportunities and challenges emerging around data-driven applications in the humanitarian, crisis response, and international development fields. During the height of the recent Ebola crisis, there were calls to release mobile phone records to track the outbreak. On the ground, first responders were collecting personally identifiable information on individuals for distribution to other health workers. Responders also saw how some individuals were stigmatized and made socially vulnerable. The assumption is that tradeoffs are necessary between values such as privacy and the urgency of rapid response to save lives. Yet outside of crises, there is a sense that the calculus is different. For a long-term development issue like food insecurity, data on consumption patterns might be collected, analyzed, and shared, but only in aggregate form to protect individual privacy.
The growing variety of scenarios such as these has policy makers, practitioners, and researchers seeking to understand the tensions and tradeoffs, risks and benefits, of data-driven interventions. Indeed, the advances made in the data for development and humanitarian response arenas have informed related domains that grapple with similar issues – from technical questions around data visualization, predictive analytics, and encryption to social concerns like ethics, governance, and unintended consequences.
Yet from a human rights and human security perspective, knowledge gaps have emerged, particularly for situations that extend beyond crisis response or where aggregate data is inadequate. Take human trafficking, for example, which infringes on fundamental rights and has broader security implications. What if mobile phone records or online data could be collected and analyzed in real time to locate someone who is being exploited, or the perpetrator of the violation? At what point should these individuals be identified, how can the information be verified, how should the data be shared, who should have access, and what would count as evidence in a prosecution?
Such edge cases become central concerns in human rights and human security. These unifying concerns give rise to questions that include the following:
Accuracy, Validity, Prediction: How should data analytics be used to make decisions in human rights and security domains?
In fields where careful data analysis is needed to establish facts about rights violations, big data correlations and predictive modeling raise concerns. Biases in sampling and in the data itself cloud decisions about whether to act on it. What counts as acceptable error for human rights and security? Who is responsible for actions based on inaccurate data or predictions?
Risks, Harms, and Benefits: What risks and benefits frameworks will guide decision makers about data-driven interventions?
New technologies have become valuable tools for shedding light on repressive regimes. But in the wrong hands, they can be used to monitor and repress those who challenge the state. In such cases, who has the authority to make decisions based on data risks and potential harms?
Balancing Immediate Threats and Values: Should values like privacy fluctuate in the face of immediate threats?
Immediate threats to people’s lives can create tensions between protecting privacy interests and facilitating aid efforts. In some cases, the very act of collecting data can put people at risk of violence. How should rights and security officials measure values like data privacy against threats to those they have a responsibility to protect?
Adaptation/Unintended Consequences: How will data collection and monitoring shape the practices of those who repress or advocate for human rights?
Advanced data collection and data mining techniques can alter how human rights situations are monitored and witnessed. Repressive regimes might find new ways to minimize visible evidence of torture while maximizing their own surveillance. What forms of abuses or repression will remain hidden in the digital environment?
Governance and Accountability: How can data regulation, policy, and standards effectively govern and provide accountability?
The use of personal data must be balanced in light of conventions on civil and human rights. As a result, policymakers will need to seek guidance on emerging data ethics, laws, and policies. How will privacy as a human right be reconciled with rights to data ownership or freedom of information? When algorithms lead to bad decisions, who is accountable? Are there ways to ensure transparency in such decisions?
Collecting, Sharing and Security: How should human rights and human security data be responsibly collected, stored, secured, and shared?
Digital environments introduce new vulnerabilities, from mishandling to cyberattacks. When inadvertently releasing sensitive data can result in irrevocable harm, the goals of privacy, encryption, and anonymity become essential to advancing basic human rights. Can encryption tools secure data and communications in rights and security contexts? Should informed consent be reexamined and adapted?
Aggregation vs. Identification: How must the human rights and human security fields interact with data tools and techniques that can identify specific individuals at risk?
Aggregate data can be used to make policy decisions affecting large numbers of people, but in some cases, human rights and security interventions may need to operate at an individual case level. Does the opportunity to provide assistance on an individual basis outweigh the possible consequences of revealing personal information?
Context and Power Dynamics: Can general guidelines and principles be established for data interventions whose social applications are deeply contextual and enmeshed in power relationships?
Each intervention occurs within a particular culture, time, and place, presenting unique challenges. The very introduction of data-centered technologies can both reveal and exacerbate power relations on the ground. It is important to include local voices and concerns in the design and engineering of data-centered interventions, and to ask: whom are data-centered technologies ultimately empowering or disempowering?
At this point, we have more questions than answers, highlighting the need for leadership around responsible data practices and policies in these areas. We do not yet know how data science, computation, and design perspectives might influence traditional legal, interventionist, and protectionist frameworks. "Translation" work becomes essential to addressing these multifaceted issues by working across disciplines and stakeholders to foster baseline understandings and cooperative solutions. Indeed, coordination is especially important as new stakeholders – such as data scientists, technologists, and corporate data holders – are brought together with traditional intergovernmental organizations, NGOs, and government actors. Most importantly, we need a much better understanding of how data-driven interventions will impact direct beneficiaries and users at risk in the human rights and security space.