Social media intelligence, the wayward child of open source intelligence


December 12, 2016

This is a guest post by Millie Graham Wood, Legal Officer at Privacy International

Would you mind if, every time you post a comment on Twitter, Facebook or another social media platform, the police logged it? I mean, it’s public – surely it’s fair game?

If you think that’s OK, then maybe it’s also OK for a police officer to follow you when you walk down a busy street. That’s also public, right?

Clearly, definitions of public and private become very problematic when you are communicating with potentially thousands of people online. The question becomes:

Are our social media posts ‘open source’ and therefore ‘open season’?

Although often conflated, Open Source Intelligence (OSINT) and Social Media Intelligence (SOCMINT) are distinct:

  • OSINT is intelligence collected from publicly available sources, including the internet, newspapers, radio, television, government reports and professional and academic literature.
  • SOCMINT can be defined as “the analytical exploitation of information available on social media networks”.

Evanna Hu is right to identify areas of concern in the use of OSINT. However, Privacy International suggests viewing SOCMINT as a distinct concept. The surveillance of social media should be removed from the definition of, and discussion about, OSINT. Instead, it should be treated as an issue in its own right. This would ensure more specific regulation, policies and safeguards that take into account the unique nature of social media: a privately-owned space (i.e. owned by private companies) where people share their thoughts.

It is through social media that we express our views, our opinions and our sense of belonging to communities. To permit unconstrained monitoring of social media by the police and intelligence agencies is to give them a deep understanding of our social interactions, our politics, our habits, our location and our daily lives, even if we are not suspected of any wrongdoing whatsoever.

In her analysis, Hu identifies some key concerns relating to OSINT: the volume and reliability of data; the sensitivity of information; and the ability to identify individuals despite their attempts at anonymisation. These concerns are also relevant to SOCMINT. However, the ability to monitor millions of social media accounts and hashtags in real time, and to then analyse and store this data, is a concern unique to social media.

We need to challenge the argument by law enforcement agencies that this is an inexpensive strategy with little impact on people’s privacy because it relies only on so-called publicly available (i.e. non-private) information. This public/private distinction is deeply problematic. It is arguable that a tweet is not private because, by its nature, you cannot control its audience. However, that does not automatically make it public, or place it within the purview of the police. Social media does not fit easily into either category. We would argue that it is instead a pseudo-private space, where there is an expectation of privacy from the state.

Examples of the use of SOCMINT have often come to light as a result of freedom of information requests and campaigning, rather than through government transparency. What is remarkable is that the consequences traditionally associated with mass surveillance of communications – such as self-censorship, the targeting of certain ethnic groups and clampdowns on political opposition – also apply in the context of SOCMINT.

A study from the Norwegian Board of Technology asked Norwegians whether the police should monitor open social media platforms. 40% of respondents thought they should, but 40% also said that such monitoring would stop them from using words they would expect to be monitored.

Examples of companies and law enforcement using social media to profile us are increasing on a near-daily basis:

  • The website Score Assured illustrates the drift towards dystopian evidence-gathering in private companies’ use of SOCMINT. The startup aims to create a tool for landlords and employers to vet prospective tenants and employees based on their social network activity. After requesting consent from the prospective tenant or employee (consent they may feel compelled to give if they want that apartment or that job), the software gathers information from their social media accounts and makes assessments about their reliability.
  • In a disturbing example of the potential consequences of SOCMINT, the company ZeroFOX monitored #BlackLivesMatter protesters during the funeral of Freddie Gray, a 25-year-old African American who died in police custody. Based on their social media analysis, they produced a report labelling organisers as ‘threat actors’ for whom ‘immediate response is recommended’.
  • In Mexico and North Dakota, law enforcement agencies used bot attacks on the hashtags used by activists on Twitter in order to undermine protest and dissent.
  • In China, the social credit platform Sesame assigns scores to Chinese citizens for their ‘trustworthiness’. The score is based on their personal data, including what they purchase online and what they post on social media. It is also affected by who their friends are: a person’s score can be dragged down if their friends are ‘performing’ poorly on Sesame. The score is then used not just as a credit rating, but also to determine whether they are entitled to social services, and by employers and landlords to assess whether they would be a suitable employee or tenant. From 2020, the Chinese government plans to enrol all Chinese citizens in a database that is likely to include information collected using methods similar to Sesame’s. (A simplified sketch of this kind of friend-weighted scoring follows below.)
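To make the mechanism concrete, here is a purely hypothetical sketch in Python of how a score that blends a person’s own behaviour with their friends’ scores could gate access to a service. It is not based on Sesame’s actual, undisclosed algorithm; the weights, threshold and names are invented for illustration.

```python
# Hypothetical illustration only – not Sesame's actual algorithm.
# Shows how a "trustworthiness" score influenced by friends' scores
# could be used to gate access to a service.
from dataclasses import dataclass, field

@dataclass
class Citizen:
    name: str
    base_score: float  # score from the person's own behaviour (purchases, posts, ...)
    friends: list["Citizen"] = field(default_factory=list)

def adjusted_score(person: Citizen, friend_weight: float = 0.2) -> float:
    """Blend a person's own score with the average of their friends' scores."""
    if not person.friends:
        return person.base_score
    friend_avg = sum(f.base_score for f in person.friends) / len(person.friends)
    return (1 - friend_weight) * person.base_score + friend_weight * friend_avg

def eligible_for_service(person: Citizen, threshold: float = 600.0) -> bool:
    """Gate access to a (hypothetical) service on the adjusted score."""
    return adjusted_score(person) >= threshold

# A's own score would pass the threshold, but a low-scoring friend drags it down.
b = Citizen("B", base_score=400.0)
a = Citizen("A", base_score=620.0, friends=[b])
print(adjusted_score(a))        # 576.0
print(eligible_for_service(a))  # False
```

The point of the sketch is simply that, once friends’ behaviour feeds into a person’s score, the incentive to curate one’s social circle and self-censor follows directly.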

With little information about how monitoring tools are used, we cannot effectively assess the risks of discriminatory practices and targeting of minorities.

If people’s social media interactions are monitored by an endless list of external entities, even with the consent of the user, the chilling effect of such surveillance could reduce social media interaction. People will curate their social interactions to game the system rather than expressing themselves freely. It could also produce a system of social and political control in which people are expected to behave compliantly because their social, economic and government records depend on that behaviour.

We believe that it is wrong to think that, because social media data can be accessed by non-validated contacts, this somehow makes the data ‘fair game’. We note that the European Court of Human Rights has long held that “there is […] a zone of interaction of a person with others, even in a public context, which may fall within the scope of ‘private life’”. [1]

We urgently need a public discussion about the rights of law enforcement agencies, government and companies to monitor us and make life-changing decisions based on our social media posts. The first step is identifying social media intelligence as an issue in its own right.

Our social media interactions should not be considered totally public and without limits for law enforcement agencies, intelligence agencies and insurance companies. Instead we need strong regulation to ensure that our social interactions – whether having a coffee with a close friend, or an update to 500 friends on Facebook – remain a private matter.

[1] Peck v. the United Kingdom, no. 44647/98, § 57, ECHR 2003‑I; Perry v. the United Kingdom, no. 63737/00, § 36, ECHR 2003‑IX (extracts); Köpke v. Germany (dec.), no. 420/07, 5 October 2010.

