RDF @ IFF: Collaboratively tackling some responsible data challenges at the Internet Freedom Festival

This blog post is written by the RDF @ IFF session facilitators: Kristin Antin of the engine room and Collin Sullivan of Benetech.

We had a great time facilitating a session dedicated to sharing responsible data challenges and solutions at the Internet Freedom Festival earlier this month! We hosted this session to provide an opportunity for those working on responsible data challenges and questions to share their experiences and knowledge with peers. We also wanted to learn more about the responsible data challenges that this community is facing. We’re glad to report that participants left the session with new resources, new ideas, and new people to contact for advice. This blog post is a summary of the discussions during this session.

We started off by asking the participants to share their current responsible data challenges. These challenges fell into the following four groups:

  1. The need for more standards (e.g. standards for sharing and storing data securely) and resources (e.g. guides and tutorials).
  2. Balancing security and usability when developing new technology tools.
  3. How do we know when we’re achieving real informed consent?
  4. How do we store sensitive data securely on a mobile phone?

So we set out to divide and conquer these challenges by exploring each topic in small groups, trying to achieve the right mix of those seeking help and those who could contribute solutions, ideas, and relevant experience. Here’s what came out of these group discussions.

More standards and resources, for practitioners

This group discussed the tension between measuring data at a medium or large scale and staying sensitive to individual people and communities. Specifically, we focused on how to build repeatable processes when those collecting and storing data have widely varying levels of comfort with different collection mechanisms, and on how to make data collection serve and provide value to the community.

When looking at standardizing research practices with distributed communities, the group looked at:

  • how to make repeatable processes when each group has its own concept of what is legible
  • collecting data on a large scale, but considering small scale sensitivity
  • standardization of data models
  • measuring the quality of collected data
  • evaluation and monitoring that requires sharing data among organizations
  • sometimes, “repeatable” processes that are legible to one group are not legible to another. How do we make them actually repeatable without forcing participants to become experts in data science, security, etc.?

When looking at providing value to the community the group discussed:

  • Survey-building can be an iterative snowball sample to determine the survey questions. Ensure that language resonates with the people you are working with (e.g. “freelancer” and “sideman” might be the same thing)
  • How do we address firewalls between communities, service providers, and funders? How can community partners know what information should not be shared with the researchers, and what information researchers will not share with the funder?
  • What data should we de-identify? What should we not de-identify because we want longitudinal data? For example: satellite pictures of heat blooms in a jungle could reveal a mining operation OR a tribe that wants to stay off the grid.

Balancing security and usability, for tool developers

This group discussion was an opportunity for tool developers to learn new ideas and approaches for balancing security and usability. Developers who host data for human rights organizations and encourage the use of encryption are often put in the uncomfortable position of being asked to access the secured data by an organization that has lost its encryption key. The developers don’t have the key either, which means the data is inaccessible to both parties. How can developers address and avoid these situations? How can we make data protection programs easier for users?

Other issues raised by participants in this group included:

  • as a user wanting to start using encryption, it is hard to know what is available and what is recommended.
  • how do we roll out encryption as a team of people?

Real informed consent, for practitioners

This group discussed the challenge of achieving informed consent in the context of service delivery. Participants acknowledged that you cannot achieve real consent when someone feels they must give it in order to receive services, and that this risk can be very difficult to mitigate, especially when people are already under duress. People in difficult situations may be traumatized and may be more willing to trade security for services or skills.

Another important aspect of informed consent is that you cannot achieve real consent when someone is not truly informed about the associated risks. It’s hard to talk about data risks and all the ways people can be exploited through data, especially when we don’t fully understand these risks ourselves, let alone future risks we don’t even know about yet.

Participants also agreed that it’s too late to begin informing people of data risks when they are asking for the service. We need to be talking to groups beforehand so they understand how they can be exploited using data. We can do better at informing people of their rights regarding their data and privacy, and they should be asking questions of their service providers.

Another responsible data challenge that came up during this group discussion is that tools for more secure communication are often hard to use and easy to forget. This dynamic can present a barrier, making it difficult to provide the service in the first place.

Secure data storage on mobile phones, for practitioners

This group discussed potential solutions to the challenge of storing sensitive data securely on a mobile phone. Say you are collecting evidence and conducting interviews with witnesses of a human rights abuse. You are storing video and pictures on your mobile phone, but the area you are in has no cell phone coverage, so you’re unable to send the data anywhere to get it off your phone (using Mobile Martus or another secure tool). How can you store this information securely on your phone so that there is no trace of it in case your phone is searched?

Some of the solutions offered included:

  • Set up a small cell tower (Open BTS, GSM cell tower, other). Be sure to research local laws regarding this approach before attempting this.
  • Hide the information. Here are some options:
    • Export the data from your phone to a small SD card, then hide that card.
    • You can go a step further and set up an encrypted data partition on the SD card. Keep a few innocuous pictures on the card so that it doesn’t look suspicious.
    • SQLCipher and other tools already exist.
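To make the encrypt-before-storing idea above concrete, here is a minimal sketch (not any specific tool discussed in the session) that encrypts a captured media buffer with AES-GCM so that only ciphertext ever reaches the SD card. The class and method names are hypothetical, and key management is deliberately elided — in a real app the key would be derived from a passphrase or held in secure hardware, not kept on the phone in the clear.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class HiddenStore {
    // Encrypt a captured photo/video buffer so only ciphertext is written to storage.
    static byte[] encrypt(SecretKey key, byte[] iv, byte[] plaintext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(plaintext);
    }

    // Recover the media later, once the phone is somewhere safe.
    static byte[] decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];                 // fresh random nonce for each file
        new SecureRandom().nextBytes(iv);

        byte[] media = "interview clip bytes".getBytes(StandardCharsets.UTF_8);
        byte[] onCard = encrypt(key, iv, media);  // what actually lands on the SD card
        byte[] recovered = decrypt(key, iv, onCard);
        System.out.println(Arrays.equals(media, recovered));  // prints "true"
    }
}
```

The nonce can be stored alongside the ciphertext (it is not secret); what must stay off the phone is the key itself.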

Ideas for future tool/feature development:

  • We could develop a phone app that puts content (photos and videos) into an encrypted, hidden place on the phone. When the phone has internet access, the data is securely uploaded to a server.
  • Develop an app that uses a defined trigger to hide, encrypt, or delete specific data in an emergency. See the Guardian Project’s PanicKit for more information.
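The first idea above — hold data on the device until connectivity returns, then upload — amounts to a store-and-forward queue. The sketch below illustrates that logic only; the class name and the `isOnline`/`uploadToServer` hooks are hypothetical, and each blob is assumed to have been encrypted before it is handed to `capture()`.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class OfflineUploadQueue {
    private final Queue<byte[]> pending = new ArrayDeque<>();

    // Queue an already-encrypted blob; only ciphertext is ever held on the device.
    public void capture(byte[] encryptedBlob) {
        pending.add(encryptedBlob);
    }

    // Call whenever connectivity may have changed; returns how many blobs were sent.
    public int flush(boolean isOnline) {
        int uploaded = 0;
        while (isOnline && !pending.isEmpty()) {
            uploadToServer(pending.poll());
            uploaded++;
        }
        return uploaded;
    }

    public int pendingCount() {
        return pending.size();
    }

    private void uploadToServer(byte[] blob) {
        // Placeholder: a real app would POST the blob over TLS to its server
        // and delete the local copy only after the server acknowledges receipt.
    }
}
```

A real implementation would also persist the queue across reboots and confirm server receipt before deleting anything locally.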

This is what we were able to come up with in one hour with about 25 people – what have we missed? Share any additional ideas and solutions in the comments section below! And we want to thank all of those who contributed to this session!



Published on: 28 Mar 2016