By Fiona de Londras, University of Birmingham
The release of yet more of Edward Snowden’s leaked files reveals the still-astonishing scale and breadth of government surveillance after more than a year of revelations. These recent papers, revealed by The Intercept website, discuss a programme within Britain’s GCHQ known as “Karma Police”, in which the intelligence agency gathered more than 1.1 trillion pieces of information on UK citizens between August 2007 and March 2009.
Spurred on by the expansion of intercept warrants under the Terrorism Act 2006, the agency collected users’ internet metadata – details of phone calls, email messages and browser connections, including passwords, contacts, phone numbers, email addresses, and the folders used to organise emails, but not the actual content of messages or emails.
Metadata can help identify people of interest, build profiles, and assist with decisions to start or escalate surveillance of individuals. All this information can often be collected at a fraction of the cost of traditional methods. In other words, metadata is not insignificant – and this is precisely why governments are so committed to collecting and processing it. However, bulk metadata collection – where information is collected from everyone, whether a “person of interest” or not – is rightly a source of deep anxiety from both security and human rights perspectives.
Does it make us safer?
It’s not at all clear that bulk collection of metadata makes society “safer”. While such data may be (and often is) useful in investigating crime, its use in helping anticipate terrorist incidents is hotly disputed. This doesn’t mean that security services cannot point to situations in which they have disrupted possible attacks through information gleaned from such activities, but it does mean that the case for bulk, rather than targeted, data collection has not been made.
In fact, the question of whether bulk collection of metadata, and the sheer volume of data it generates, may actually hinder effective identification of terrorist suspects and perpetrators of other serious crime hasn’t been satisfactorily addressed. Nor is it clear whether a bulk data collection approach, rather than one that emphasises directing more resources towards traditional police and human intelligence techniques, is any more efficient, especially in the domestic sphere where GCHQ and MI5 share information.
Bulk metadata collection relies on algorithmic computer analysis followed by human judgement. It’s not clear how much is missed, or how many false positives are generated, by adopting a machine-centric approach over one that involves more human experience. Furthermore, we don’t know to what extent resources are diverted away from funding the human expertise of policing, relationship building and developing professional investigative instincts as a result of such programmes of data collection and processing.
At the very least, intrusive operations such as this one should work; if they are to be accepted at all, they should be justified by their effectiveness, something that remains more a matter of rhetoric than of established fact.
Is it justified?
Even if bulk data collection does work, is that justification enough in itself? The answer, it seems to me, must be determined by oversight and the importance we ascribe to our civil rights.
No agency that collects and processes this volume of information should be without effective oversight. Yet we have seen that, despite GCHQ being subject to oversight from the Intelligence Services Commissioner for its intelligence function, and to the secretary of state for foreign and commonwealth affairs for its overseas-related work, its culture of secrecy and non-disclosure means that we’re left to rely on leaks and whistleblowers to get a clear picture of what is happening. The leaked documents demonstrate this all too well, showing internal discussions that reveal GCHQ bosses felt their oversight bodies were “on side”.
This is a matter of real concern from a human rights and civil liberties perspective. The growth of bulk data collection and computerised processing by government agencies is fundamentally shifting political, operational and potentially even popular conceptions of what “privacy” really entails. What are we entitled to keep to ourselves? When are we entitled to expect the state to justify its intrusions into that private space? The fact that this is metadata rather than content doesn’t remove the privacy implications of such surveillance.
As the Court of Justice of the European Union said in April 2014, metadata “may allow very precise conclusions to be drawn concerning … private lives”. In order for any retention regime to be proportionate – and so stay within the requirements of human rights law – proper safeguards and limitations must be built in. Otherwise the effects of surveillance could be corrosive, creating a chilling sense that “one is being watched permanently”.
As GCHQ continues to amass data on internet users, it is time for political leaders to answer two vital questions: does bulk data collection work? And, if so, is it worth the cost?
Fiona de Londras, Professor of Global Legal Studies, Birmingham Law School, University of Birmingham
This article was originally published on The Conversation. Read the original article.