Digital identification systems: responsible data challenges and opportunities


July 5, 2017
Last week, I had the pleasure of being part of the Ethics and Risks working group at the 2017 ID2020 Platform for Change Summit. This event focused on issues faced by people living without recognised identification, and brought together multiple stakeholders, public and private, to discuss these challenges.

Through the working group, we discussed challenges, opportunities and aims for addressing the ethical challenges that come from digital identification systems. I found it fascinating to apply many of the general lessons from our Responsible Data work to a very specific problem. Below are just some of the key points that stood out for me, as a non-expert in identification systems.

For context, our group was one of a few different working groups, some of which were focused on specific issues mentioned below – like security or legal regulations. To avoid duplication, we focused on issues that fell explicitly within “ethics and risks” rather than issues that other groups might be addressing.

Main takeaways

  • Ultimately, we need to think about the power dynamics at play in digital identification systems. Who makes decisions about whom? Whose voices are heard, and how are people affected by the system in place? Who controls the infrastructure upon which our identification systems are built, where are they based, and what dependencies does that create?
  • For people to be able to give meaningful informed consent, they need to be able to opt out without losing access to services. What kinds of systems need to be in place alongside digital identification systems, for that to be a possibility?
  • Practising data minimisation: what would a digital identification system look like that didn’t collect and aggregate people’s biometric data, or that didn’t gather people’s sensitive personal data in one centralised database?
  • Balancing contextual differences with bespoke systems: how can we make the most of technical expertise from around the world while designing for different contexts? Existing informal systems, and differences of culture and context, will deeply affect how digital identification systems are used and perceived.

Unintended consequences (and how to mitigate them)

Many of the discussions around the risks of digital ID systems centred on unintended consequences: what harmful effects could a digital identification system have on society at large, or on a particular group of people?

Surveillance

One of the most obvious risks centred on surveillance. A digital identification system used comprehensively by a particular population might mean gathering a huge amount of data about those individuals: their behaviours, networks, likes and dislikes. Holding that much information creates potential for misuse. One participant suggested imagining a formerly democratic government with a comprehensive digital identification system falling into the hands of an authoritarian regime. As history tells us, holding easy-to-access data on particular communities can facilitate human rights abuses.

As a result, practising data minimisation is key: designing the system so it functions with the fewest data points necessary, and collecting only the minimum data needed for the system to work as intended.
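
As a toy illustration of what data minimisation could mean at the schema level, the Python sketch below (with entirely hypothetical field names) contrasts a “collect everything” registration record with one that keeps only what the system strictly needs to function.

```python
from dataclasses import dataclass

# A "collect everything" record: every extra field is something that can
# later be leaked, misused, or demanded by whoever controls the system
# in future. All field names here are hypothetical.
@dataclass
class MaximalRecord:
    full_name: str
    date_of_birth: str
    home_address: str
    ethnicity: str               # sensitive, and not needed to prove identity
    religion: str                # sensitive, and not needed to prove identity
    fingerprint_template: bytes  # biometric, effectively irrevocable if leaked

# A data-minimised alternative: just enough to issue and later verify a
# credential, and nothing that profiles the holder.
@dataclass
class MinimalRecord:
    credential_id: str  # random identifier, not derived from the person
    public_key: bytes   # lets the holder prove possession of the credential
```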

Related to this is intentional misuse of the system, whether by those who come to control the system at some point in the future, or by outsiders looking to break it. For the former, data minimisation becomes even more important, as does careful decision-making about how the data is stored (e.g. encrypted, without anyone administering the system being able to access or see the data itself) and how the system itself operates.
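
To make the storage point concrete, here is a minimal sketch of one possible approach, using Fernet symmetric encryption from Python’s `cryptography` library. The assumption (mine, not the working group’s) is that the encryption key is held by the data subject, or by a key service the operator cannot reach, so the operator’s database only ever contains ciphertext.

```python
from cryptography.fernet import Fernet

# Key generated and held by the data subject (or by an escrow service
# the system operator cannot access); the operator never sees it.
holder_key = Fernet.generate_key()
holder = Fernet(holder_key)

# What the operator's database stores: ciphertext only.
record = b'{"credential_id": "abc123"}'
stored_ciphertext = holder.encrypt(record)

# An administrator querying the database sees only opaque bytes...
print(stored_ciphertext[:16])

# ...while the holder, who has the key, can still recover the record.
assert holder.decrypt(stored_ciphertext) == record
```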

Creating a massive and supposedly “secure” system may well make it a target for black-hat hackers wanting to see what they can break. One way of mitigating this might be to carry out regular penetration tests, in which security experts are asked to try to break the system. For these tests to be effective, time and resources for regular system and security updates would also need to be built in.

Power dynamics

In the tech sector at large, power is already heavily concentrated in just a few tech giants. The infrastructure upon which an identification system is built creates many technical and social dependencies, and further contributes to that concentration of power. It is often international corporations that have the technical skills to build such systems, and while they can offer technical resources far beyond those of the state actors looking to develop identification systems, partnering with them can build in dependence on a particular company.

As a worst-case scenario, a single company or group of companies could take the lead in developing infrastructure and build up something of a monopoly. If these systems are used to administer or manage public services, the governments partnering with those companies would be entirely dependent on them to keep administering and running the infrastructure in order to offer those services.

This kind of relationship isn’t new: similar power dynamics are present in public-private partnerships generally. But those dependencies are amplified in the case of identification systems, particularly where governments hold neither the knowledge to run the technical infrastructure themselves, nor the incentive to develop that capacity in-house.

To avoid handing too much power to the companies running these systems, one option could be to develop capacity-building or training programmes from the beginning of the project, with the long-term goal of eventually handing management of the system over to the public body in question. Whether companies would be willing to hand over that power, and what incentives might encourage them to do so, is another question.

Societal effects

Though digital identification systems might address an easily visible need, there might be other needs or systems at play, and a new system might disrupt effective informal systems that communities have developed for establishing trust. From the state’s perspective, these informal systems might seem ineffective, when in fact they can be incredibly efficient systems that are simply illegible to anyone outside a particular community. One way of avoiding unnecessary disruption might be to work closely with researchers and members of the communities in question, to ensure that the identification system is designed with safeguards around existing systems, or carries their values into the larger system.

For people participating in digital identification systems, what does it mean to give meaningful informed consent? For consent to be informed, it must be voluntary, which means individuals must be able to opt out without losing access to key services. For that to be true, there needs to be an alternative (potentially analogue) system in place that affords the individual the same access as the digital one.

As such, continuing to develop parallel, alternative systems alongside digital identification systems will be an important part of ensuring that informed consent can truly be given.

Opportunities

Designing intentionally for vulnerable communities

As mentioned above, one issue that comes up often in responsible data analyses is that of power: who holds the least power in a particular situation, and how are their needs being addressed?

In the case of identification systems (and, more generally, of technical systems built by a certain group of people with particular biases and behaviours), it’s important to consider the needs and use cases of marginalised communities. Actively designing for those use cases would help avoid unintended negative consequences further down the line, especially given that accountability structures are often very weak for these communities.

In too many situations, the needs of vulnerable communities come as an afterthought to the technical design, which means they are addressed through last-minute tweaks rather than through active, intentional design choices.

Fluidity of identity

Contrary to the narrative often offered, in which a single unique identity ‘to rule them all’ is the most powerful form an identification system could take, the opposite can also be true: holding multiple identities, and being able to switch smoothly between them, can be very empowering.

Think, for example, of non-binary people, or of people who might want to change parts of their identity as they move through society or between communities. If designed thoughtfully and with this use in mind, digital identification systems could allow for smoother transitions between identities, letting individuals define themselves by what they consider important, and choose when to present different identities.

Streamlining systems

Digital systems, if designed and implemented responsibly, could offer more transparency to the individual, and make transactions that require trust between parties easier. Analogue identification cards can be lost, and are sometimes not easily replaced without layers of confusing bureaucracy. Digital systems could be designed so that a lost credential is more easily replaced, or so that identification can be verified in more ways than simply showing a card.
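
As one hedged sketch of what “more ways than simply showing a card” could look like, the example below uses Ed25519 signatures from Python’s `cryptography` library: the issuing authority signs a credential once, and any verifier holding the issuer’s public key can check it offline, with no card and no lookup in a central register. The payload and names are illustrative, not drawn from any actual system discussed at the summit.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuing authority signs the credential once, at enrolment.
issuer_key = Ed25519PrivateKey.generate()
credential = b'{"credential_id": "abc123", "valid_until": "2020-01-01"}'
signature = issuer_key.sign(credential)

# Any verifier with the issuer's public key can check the credential
# offline: no physical card, and no query to a central database.
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, credential)
    print("credential is genuine")
except InvalidSignature:
    print("credential is forged or has been altered")
```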

Biometric data seems to be playing a role in the design and implementation of some digital identification systems, but the way this extremely valuable and sensitive data is stored and used could be made more secure. Alternatively, biometrics could be used as a safeguard in case a credential is lost, rather than as a core feature of the identification system.

Group objectives

In response to the challenges and opportunities mentioned above, our group came up with a few concrete objectives for ID2020. These were:

  • Designing for marginalised communities: as mentioned above, intentionality in considering the needs of vulnerable groups is crucial to avoid unintentionally harming anyone, or exacerbating existing inequalities.
  • Holding ourselves to higher standards: in some cases, legal regulation might not have caught up with the ethical challenges of establishing digital identification systems. Where we can identify a ‘red line’ we shouldn’t cross, or an issue we should consider even if it isn’t mandated by law, we should set the bar higher for ourselves.
  • Prioritising rights of individuals over rights of institutions: it was noted that companies, governments and other institutions are sometimes more strongly represented in these conversations than those speaking for individuals, which skews priorities and perspectives. As a result, the rights of individuals must come first.
  • Using deliberative democratic systems for meaningful user engagement: rather than designing the system only with people in positions of power, processes could be designed in which individual users (or groups of users) have the opportunity to give feedback that is genuinely listened to and taken on board.
  • Developing threat models to proactively plan for risks against individuals: in order to plan intentionally to mitigate risks, we must first know what those risks are. One suggestion was to develop a series of threat models (which would likely differ across individuals, cultures, countries and societies) to highlight those risks; a minimal sketch of what such a model might look like follows this list.
  • Building in systems for accountability within the infrastructure: no matter how hard they try, those designing identification systems will not be able to foresee all the potential harms or risks that might arise. As such, it is crucial to include processes through which individuals (particularly those from marginalised groups) can flag a problem they are experiencing and have their voices heard by decision makers.
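
To give the threat-modelling objective a concrete shape, here is a minimal sketch (in Python, with purely illustrative entries) of the kind of structure such a model might take: for each potential actor and the asset they could reach, name the harm to the individual and a planned mitigation, so that risks are enumerated up front rather than discovered after deployment.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    actor: str       # who could cause harm
    asset: str       # what they could reach
    harm: str        # what could happen to the individual
    mitigation: str  # planned countermeasure

# Illustrative entries only; a real model would be built with the
# communities at risk, and would differ across countries and cultures.
threats = [
    Threat("future authoritarian government", "central identity register",
           "targeting of minority groups", "data minimisation; no ethnicity field"),
    Threat("malicious insider", "biometric database",
           "irrevocable identity theft", "encrypt at rest; audit all access"),
    Threat("external attacker", "authentication service",
           "denial of access to services", "analogue fallback system"),
]

for t in threats:
    print(f"{t.actor} -> {t.asset}: {t.harm} (mitigation: {t.mitigation})")
```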

Conclusion

This post is a non-exhaustive analysis of just some of the major issues that arise around digital identification systems. There are many people, institutions and organisations thinking carefully and more comprehensively about the implications, opportunities and risks of digital identification systems – and I look forward to following along with those discussions in the future.

If you have further reading to suggest, get in touch at post@theengineroom.org or directly at zara@theengineroom.org.

About the contributor

Zara is a researcher, writer and linguist who is interested in the intersection of power, culture and technology. She has worked in over twenty countries in the field of information accessibility and data use among civil society. She was the first employee at OpenOil, looking into open data in the extractive industries, then worked for Open Knowledge, working with School of Data on data literacy for journalists and civil society. She now works with communities and organisations to help understand how new uses of data can responsibly strengthen their work.
