
The Gentle Civilizer of Technology


Ravi Naik is the Director and Solicitor of AWO Legal. This is a shortened version of a speech given by the author at the Harvard Journal of Law and Technology on 29 October 2019.


Introduction: Constitutions and Technology

For constitutional lawyers, it could not be a more exciting time to be in the United States given the tumultuous political and constitutional issues unfolding. We are able to watch in real time as the tectonic plates of the democratic process shift while remaining rooted in the solid and stable core of a written constitution.

This sits in contrast to the approach in the United Kingdom, where a fluid and unwritten constitution means that even misleading the Queen is not enough for the position of Prime Minister to come under scrutiny. Yet the United Kingdom’s Supreme Court showed the power of that unwritten constitution in a recent historic ruling. And the courts did so by exercising a power going back centuries.

In 1612, Francis Bacon wrote of judges, “Let them be lions, but yet lions under the throne; being circumspect that they do not check or oppose any points of sovereignty.” Writing at the time of the development of the British system of administrative law through judicial review, Bacon viewed the balance of constitutional power as favouring deference to the sovereign monarch. At the same time, a progressive judiciary sought to assert rights over the sovereign. In particular, Bacon’s great adversary Sir Edward Coke, in the Case of Proclamations, declared the power of the sovereign to be checked and limited by the judiciary: “The King hath no prerogative but that which the law of the land allows him.”

What does this all have to do with data and privacy? Most people understand privacy as being about issues like those that arose in the Equifax breach case, where the pleadings showed that “Equifax employed the username ‘admin’ and the password ‘admin’ to protect a portal used to manage credit disputes, a password that ‘is a surefire way to get hacked.’” So, what do issues of human rights, the Constitution, and the 17th-century travails of Bacon and Coke have to do with Facebook, Cambridge Analytica, and digital technology?

The answer lies in the ability of constitutional principles to bear on data protection work and data rights — the civil liberties of modern technology. What is it about the data rights regime that allows human rights principles to apply? A human rights dimension is inherent in the very definition of data: personal information. And if knowledge is power, information is its source. Today, therefore, big data has produced big power. The dynamics of that power over data — what power knows and what power intends to do with what it knows — is a defining issue of our time. Data harvesting and misuse cut into a society: they tear and fissure trust, fairness, and, ultimately, the delicate structures of our democracy. The power one has against these abuses depends on how confident our civil society is and how much it values an individual’s position in society.

In Europe, tools have been developed to get to grips with what tech is doing to the most basic aspects of society. This regime is grounded in the General Data Protection Regulation (GDPR), which provides a constitutional bulwark against the asymmetry of power over data. The Regulation seeks to enshrine personal autonomy and dignity by providing individuals with a class of rights over personal data. Those rights promise to revolutionize individuals’ relationship with those who seek to exploit that data, whether for profit or otherwise.

The EU is only now beginning to see the impact of the GDPR. It is not entirely clear how it will develop: will it, for example, be able to curb the excesses of power or revolutionize fundamental rights? Can those laws keep pace with developing tech?

Historical Background

In the first decade after the Internet as it exists today was switched on in January 1983, cyberspace was a brave new world – a glorious sandpit for utopians and computer science researchers. There was, in that magical virtual world, no crime, no spam, no commercial activity, and little concern about security. It was, in a way, a kind of wonderland, and it gave rise to the techno-utopianism embodied in John Perry Barlow’s “Declaration of the Independence of Cyberspace.” That Declaration encapsulated a libertarian ideal of cyberspace and, importantly, data. This space was, as these pioneers saw it, unregulated, free and lawless. Indeed, the Declaration was written in response to the US Communications Decency Act coming into law in 1996.

The continued importance of these voices from the past cannot be overstated, for it is these concepts and ideals, the so-called California ideology, that guide and inspire the data giants of today — Google, Apple, Facebook, and many more.

However, at the same time that these libertarian idealists were drafting declarations and concepts, international regulatory frameworks were being developed. In 1980, the Organization for Economic Cooperation and Development (OECD) issued its Guidelines Governing the Protection of Privacy and Trans-Border Flows of Personal Data. The seven principles governing the OECD’s recommendations for protection of personal data included giving data subjects notice when their data is being collected and prohibiting disclosure of data without the data subject’s consent. Moreover, data subjects would be informed as to who is collecting their data, allowed to access their data and make corrections to any inaccurate data, and have a method available to hold data collectors accountable.

The United States, while endorsing the OECD's recommendations, did not implement them. Today, much is made about the lawlessness of the internet in the United States, but the reality is that the United States was far ahead of most countries in legislating in this space. In 1973, for example, the HEW (Health, Education, and Welfare) Advisory Committee on Automated Data Systems published a Code of Fair Information Practice. Many of the principles now familiar from the GDPR were written into that Code. Had those principles actually been built into the United States legislative framework at the time, who knows where we would be today.

Nevertheless, similar laws began to develop in Europe. All seven OECD principles were incorporated into the EU Data Protection Directive of 1995 (a matter of months before the declaration by Mr. Barlow). Those principles, including the data protection principles, were also, more or less, incorporated into the 1998 Data Protection Act in the UK. Coupled with developing tort frameworks, these policies gave rise to a familiar framework for data protection in the UK: a sort of charter of digital rights. Indeed, when announcing the recent Data Protection Bill in 2018, Matthew Hancock, the minister introducing the Bill, stated that the Bill would be “based on liberal and not libertarian values.”

Case Studies

The following case studies illustrate both how that framework operates in practice and the related issues that can arise in data protection claims.

  • Cambridge Analytica: Political Profiling, Precedents, and Jurisdiction
  • Advertising Technology

Introduction

Professor David Carroll is perhaps most well-known for his fight for accountability against Cambridge Analytica. That fight has been subject to a shift in understanding, moving from the initial, limited publicity around the case, to more intricate profiles of Professor Carroll and the case, through to a full Netflix documentary, The Great Hack.

It is important to begin with an understanding of the oft-misunderstood purpose of the company. CEO Alexander Nix has outlined what the company does in a sales pitch, where he stated that the company was able to gather four to five thousand data points on each adult in the US. This data was said to have the potential to make a difference in seismic political events ranging from Brexit to the election of President Trump.

Of course, the use of data in politics is not new. Indeed, some cyber-libertarians were integral to the political campaigns of American presidential candidates such as Bush, Gore, and, famously, Obama. But the use of data by Cambridge Analytica and in the 2016 election was qualitatively different, owing to both the quality and the quantity of the data involved. Indeed, the 2020 election will be different again, precisely because the quantity and quality of personal data have increased greatly. Individuals spend much more time online, and the services they use siphon off information about us at every moment of our daily lives, generating ever-increasing pools of personal data.

It was an interest in that use of personal data that led Professor Carroll (and others) to instruct me to (1) find out what Cambridge Analytica was doing and (2) challenge the legality of the company’s practices.

Subject access request: The “right to know” in action

Professor Carroll had filed a subject access request, exercising the individual “right to know” against the company, and discovered that the company possessed an intricate and personal model of his political beliefs in a spreadsheet. Based on the spreadsheet, two things were clear. First, the data set was not complete: the rankings were provided without detail or explanation, despite the presence of certain significant metrics in the data set, such as “gun rights importance.” Second, it was self-evident that the company was profiling people on the basis of their political beliefs, an activity that could not be lawful. The European data protection regime is rooted in a hierarchy of data, where some categories of information are given extra protection. A characteristic particularly protected by the regime is “political opinion” data.

Political profiling: Legal limits

To process sensitive personal data, or special category data as it is now called under the GDPR, the regulatory framework revolves around the idea of consent. There is a high threshold of proof to show that a data controller has received consent to the processing of such data. There is a public interest exception to this requirement, but it is hard to see how a court would balance the public interest in favour of processing sensitive personal data where, as in the case of Cambridge Analytica, that data is processed purely for profit.

In the UK, there is limited case law on the use of political opinion data. In Butt v. SSHD, a case of first impression, the courts found that the protections on special category data — political opinions — could not apply, as the data had been manifestly made public by the claimant himself. More specifically, the claimant had already chosen to put the information into the public domain and, further, had in fact chosen a career that involved putting his political opinions into the public domain.

What happens, however, where someone has a public social media account but does not seek to put their political profile on such a public footing? Many people have social media accounts through which they may have said things in public about, for example, the American impeachment proceedings or Brexit. At what point does sensitive personal data deserving of protection turn into public information not deserving of protection? And where algorithms become increasingly sophisticated, able to take anonymous data and re-purpose it, how can the law properly protect such information? For example, if someone buys a New York Times membership using a credit card, that data is available and often sold. The individual will be anonymised, but that data can be, and often is, re-purposed using algorithms that tie the anonymised data to other data, such as a Facebook account and a voting record, to create profiles. Does that information lose its protected quality because there are some public elements to it? These are all questions that courts will no doubt have to grapple with in due course.
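To make the re-identification point concrete, the following is a minimal, purely illustrative sketch of how “anonymised” records can be tied back to a named individual simply by joining two datasets on shared quasi-identifiers. All of the data, field names, and linkage logic below are invented for the example; no real dataset, vendor, or case file is being described.

```python
# Illustrative sketch only: re-identification by linkage on quasi-identifiers
# (ZIP code, birth year, gender). Every record here is made up.

# An "anonymised" purchase record: the name has been stripped,
# but quasi-identifiers remain.
anonymised_purchases = [
    {"zip": "10027", "birth_year": 1975, "gender": "M", "item": "news subscription"},
]

# A separate dataset that does carry names (e.g. a voter file).
voter_file = [
    {"name": "D. Example", "zip": "10027", "birth_year": 1975, "gender": "M",
     "party_registration": "Democratic"},
    {"name": "A. Other", "zip": "94110", "birth_year": 1988, "gender": "F",
     "party_registration": "Republican"},
]

def link(purchases, voters):
    """Join the two datasets on shared quasi-identifiers."""
    matches = []
    for p in purchases:
        for v in voters:
            if (p["zip"], p["birth_year"], p["gender"]) == (
                    v["zip"], v["birth_year"], v["gender"]):
                # The "anonymous" purchase is now tied to a named,
                # politically profiled individual.
                matches.append({**v, **p})
    return matches

print(link(anonymised_purchases, voter_file))
```

Real linkage attacks are more sophisticated, using probabilistic matching across many more attributes, but the underlying mechanic is no more complicated than this join.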

Litigating against Cambridge Analytica

In the case against Cambridge Analytica, we made these arguments. Processing an individual’s political opinions from derived and pseudonymous data without consent was not lawful. In order to determine the extent of the unlawful processing, we issued a claim for full disclosure following the unsatisfactory response to the subject access request, and we coupled that discovery request with a claim for unlawful processing. We also filed a complaint before the British regulator of data, the Information Commissioner’s Office.

The claim was issued on 17 March 2018, the day before the Chris Wylie and Facebook revelations.

So, what happened? For reasons best known to the company, its initial defence was premised on jurisdiction. It claimed that, because Professor Carroll was based outside of the jurisdiction, he was not entitled to the protections of the data protection regime. The company would soon learn that it was very wrong about this issue of jurisdiction.

Jurisdiction 1: The reach of data protection

Jurisdiction is the concept that ties a state’s ability to regulate conduct, or the consequences of events, to its territory. This notion derives from a state’s sovereign right to establish legal relationships over entities and interests subject to its control. Thus, if a state has no jurisdiction over an individual, it has no authority to subject that individual to its laws. A state’s jurisdiction is therefore tied principally to its territory. States are said to be the sole arbiters of how to regulate conduct within their borders, and a court of one state may not judge conduct occurring in another state without consent. Consequently, the extension of jurisdiction into another state’s territory is a breach of that state’s sovereignty.

How does this jurisdictional framework apply to data, which can flow across states instantaneously and without restriction? Section 5 of the British Data Protection Act (DPA) provides one extended jurisdictional framework by applying to “any data if the data controller is established in the United Kingdom and the data are processed in the context of that establishment.” The GDPR takes matters further, with Article 3 extending jurisdiction to “the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not.” Both regimes turn the traditional concept of jurisdiction on its head: their broad and sweeping jurisdictional clauses are tied not principally to the territory of the state, but to the data processing itself. It is thus irrelevant whether the data subject is abroad — all that matters is where the company is established and where the data processing occurs.
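As a rough illustration of that logic, the sketch below expresses the “establishment” test described above as code. It is a deliberate simplification for exposition only: it captures only the limb discussed here, ignores the further limbs of Article 3, and is in no sense a statement of the law.

```python
# A loose, simplified sketch of the jurisdictional logic described above
# (DPA s.5 / GDPR Art. 3(1)), written as code purely for illustration.

def data_protection_regime_applies(controller_established_in_uk_or_eu: bool,
                                   processing_in_context_of_establishment: bool,
                                   data_subject_in_uk_or_eu: bool) -> bool:
    # The location of the data subject is irrelevant under this limb:
    # what matters is where the controller is established and whether the
    # processing occurs in the context of that establishment.
    _ = data_subject_in_uk_or_eu  # intentionally unused
    return (controller_established_in_uk_or_eu
            and processing_in_context_of_establishment)

# Professor Carroll's position: US-based data subject, UK-established controller.
print(data_protection_regime_applies(True, True, data_subject_in_uk_or_eu=False))  # True
```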

Cambridge Analytica: Outcomes and accountability

Cambridge Analytica, in its pleadings, sought to refute this position, claiming that it would be “territorially extravagant” for Professor Carroll to invoke UK jurisdiction over his data. Based on this belief, it informed the ICO — the regulator of its very business — that, as an American, Professor Carroll had no more rights to his data “than a member of the Taliban sitting in a cave in the remotest corner of Afghanistan.”

The ICO disagreed. In May, the regulator issued an Enforcement Notice directing the firm to give Carroll his data once and for all. Failure to comply within thirty days, it warned, would result in criminal charges, but the company never complied. Cambridge Analytica was charged with the criminal offence of breaching an Enforcement Notice and pleaded guilty in January 2019. The criminal fine was ultimately measly, but the ICO’s charges were meaningful because they clearly underscored the fact that people outside the UK had these rights to their own data to begin with. “This prosecution, the first against Cambridge Analytica, is a warning that there are consequences for ignoring the law,” the Information Commissioner, Elizabeth Denham, said in a statement following the hearing: “Wherever you live in the world, if your data is being processed by a UK company, UK data protection laws apply.” Furthermore, the ICO confirmed that the company had acted unlawfully in processing the data, creating a precedent of enormous value that supports the continued fight for access to the data.

Jurisdiction 2: The ambitious reach of data protection and other European laws against modern technology

It is important, of course, to note that the “if” in Ms. Denham’s statement was doing a lot of heavy lifting. The ICO’s jurisdictional position was not, fundamentally, a surprising one: in other words, if you commit a crime in our country, you are a criminal in our country. The genuinely surprising jurisdictional positions are found instead in the extravagant cases that have recently come before the European Court of Justice, the ultimate arbiter of law in the European Union.

Two very recent cases of worldwide effect demonstrate this trend:

  • Google v. CNIL — In this case between Google and the French data protection regulator about the “right to be forgotten,” the ECJ ruled that France’s order demanding that Google remove certain offending search results worldwide was impermissible, at least as initially conceived. Most reacted to the decision by suggesting that the court had limited the reach of the data protection regime. But the reality is a little starker: the court said at para 72 that although “EU law does not currently require that the de-referencing granted concern all versions of the search engine in question, it also does not prohibit such a practice.” The remaining potential is tantalising for data protection lawyers because it leaves the door wide open for future extraterritorial regulation of the Internet. Indeed, in the same week as Google v. CNIL, the court handed down judgment in a case that allowed regulators to walk straight through that open door.
  • Eva Glawischnig-Piesczek v. Facebook Ireland Limited — The court decided a case that pitted Facebook against an Austrian politician who had requested that Facebook remove a user’s disparaging public posts about her. Facebook declined to remove the offending posts, and the Vienna Commercial Court issued an injunction against the company requiring removal not only of the offending posts, but also of “identical” and “equivalent” posts internationally. Facebook responded by removing the original post only for users located in Austria. The Austrian Supreme Court then referred the matter to the CJEU, which held that Article 15 of the EU’s e-commerce directive (a law that bears hallmarks of Section 230 of the US Communications Decency Act) does not prohibit EU states from ordering extremely broad injunctions against platforms like Facebook to take down offending material. The court held that these injunctions can cover a wide array of material — not just the original posts but also reposts and “equivalent” posts — and apply worldwide. Under this interpretation, however, a problem arises when sovereign interests collide. This is most obvious in the tension between European states and the United States. What happens when a European take-down notice runs up against the First Amendment? What happens when a French court demands that Facebook take down content by way of an order that is permissible in France but impermissible in the United States? Will Facebook insist that the French court order does not apply to Facebook’s product in the United States, or will the French court listen to and accommodate Facebook’s concerns? Or perhaps the courts will insist that the interests of protecting the offended party in France outweigh free speech concerns in the United States? After all, data rights are fundamental rights in the EU. And what happens if Facebook then refers this matter to the courts of superior record in the United States?

These, and similar, rulings signal that the world is heading toward international legal confrontations, and perhaps diplomatic confrontations in turn, over data protection regimes. The jurisdictional battles are only just heating up.

Introduction

Data protection frameworks seek to give effect to privacy and power dynamics in the modern age. These rights amount to data rights, which require analogy and analysis in the same way other rights do. That desire to provide a constitutional framework against data misuse, premised on a conception of fundamental human rights, is detailed within the data protection regulations. The GDPR, for example, mentions fundamental rights thirty-three times. The very first recital opens:

The protection of natural persons in relation to the processing of personal data is a fundamental right.

Why are these fundamental rights of wider importance? Because data protection provides a rights regime to protect individuals’ information vis-à-vis any authority, whether private or public. While society has been used to a constitutional settlement between the state and its citizens — a vertical power axis — the data protection regime applies on a horizontal footing between two private actors. This is a marked shift in our social contract that is likely to have widespread consequences.

This constitutional reach can be seen almost daily. For instance, Facebook agreed to overhaul targeted advertising for jobs, housing, and financial loans after the ACLU and other American civil liberties groups threatened a lawsuit highlighting systematic discrimination. These campaigns are likely to grow, as civil society pivots towards the new power of technology.

Behavioural advertising: the case

One example of this trend is the leading regulatory complaint concerning the behavioural advertising technology industry — a case that is likely to have widespread ramifications as it looks at the backbone of the Internet, namely advertising. There are two core issues about behavioural advertising that give rise to concerns about both the data involved and the scale of such data:

  • The data — Personal data is involved in the bidding used to target behavioural advertising: the economy of people’s attention. This includes “categories of content”; for example, as shown in evidence we have filed before European regulators, the categories of information in a bid request include whether an individual is looking at “incest” and “abuse support.” Google’s own evidence confirms that it marks material under categories such as “Native Americans,” “Lesbian, Gay, Bisexual & Transgender,” “left-wing politics,” and so on. Collecting such deeply personal information is extremely disturbing. (A simplified sketch of the kind of information carried in a bid request follows this list.)
  • Scale — Estimates of daily bid requests in advertising run into the billions. Profoundly personal information, at that vast speed and scale, is constantly floating around the Internet, without any clear knowledge of who is hoarding it or how it is being used. Moreover, Cambridge Analytica was only one of many hundreds of companies involved in this ecosystem. The true extent of this system is almost impossible to monitor, so the complaints that I am instructed on before the European regulators focus on the security of such information, as security is one of the key concepts of the data protection regime. But if personal data is broadcast at such speed and such scale, can it ever be secure?
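The sketch below illustrates, in simplified form, the kind of information such a bid request might carry. It is loosely modelled on the general structure of real-time-bidding requests, but every field name and value is a hypothetical stand-in rather than a quotation from any exchange’s specification or from the evidence filed in the complaints.

```python
# A heavily simplified, hypothetical sketch of a real-time-bidding request.
# All field names and values are illustrative assumptions.
bid_request = {
    "id": "example-request-001",
    "site": {
        "page": "https://example.org/support/abuse-survivors",
        # Content categories attached to the page being viewed: this is the
        # kind of "category of content" the complaints are concerned with.
        "content_categories": ["abuse support", "left-wing politics"],
    },
    "device": {
        "ip": "203.0.113.7",        # approximate location
        "ua": "Mozilla/5.0 (...)",  # browser / device fingerprinting material
    },
    "user": {
        "id": "cookie-or-mobile-ad-id",  # pseudonymous but persistent identifier
        "interest_segments": ["Lesbian, Gay, Bisexual & Transgender"],
    },
}

# A structure like this is broadcast to many prospective bidders in the
# milliseconds before a page loads; each recipient can log and retain it,
# which is why the security question above is so difficult to answer.
```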

These are just a few examples of the many cases that necessitate increased protection of fundamental human rights around data. Indeed, the entirety of my own caseload concerns either tech or data misuse that heavily implicates constitutional principles. And there are sure to be many other lawyers like me. 2019 is likely to be remembered for data protection troubles and the constitutional impact of big data, and big data is very much in the regulatory spotlight for 2020.

The Gaps and the Future

Introduction: Current gaps and solutions

But there are limits to the new regime. For instance, the right to know must necessarily be ex post facto: by the time you know, the damage could be done. The GDPR does not currently provide an ex ante right to know. Moreover, inherent in that right is a further question — what does the right to know look like in practice? Do we have a right to reasonable inferences about how data is and has been used, rather than just the core information? This is the subject of much discussion in Europe, with the expanded view gaining serious traction and showing the potential to develop into a real and tangible right.

In the immediate future, more glaringly, lies another question: Is the legal framework that we have capable of dealing with the future? How can the law respond to developing technology? Artificial intelligence and related technology will be the most important agents of change in the 21st century, and these developments will transform our economy, our culture, our politics, and more. Indeed, technology has already transformed most of these matters. Yet the technology that has previously shifted our economic and our political reality is rudimentary compared to what may come.

Critically, when technology becomes able to deliberate, reason, and act outside and beyond the instruction of a principal actor, how are we to regulate that behaviour? As decisions become increasingly automated, debate is required to ensure that we face these questions head on. Here is just one potential road map to some solutions:

  • Regulation at inception
  • Developing regulations to meet future challenges

First, a solution must address the regulatory issues involved in the initial development and deployment of technology. Those who design our built environment do so cognizant of ethical implications as laid down in law, and the architects of our digital environment may need a similar framework. As Paul Nemitz, principal adviser to the European Commission, explains, tech companies “will have to think from the outset . . . about how their future program could affect democracy, fundamental rights and the rule of law and how to ensure that the program does not undermine or disregard . . . these basic tenets of constitutional democracy.”

Second, in addition to implementing controls at the outset of new developments, technology needs to be kept under review. Regulating technology only at the point of inception will never be enough. The pace of change, the lack of control over why decisions are made, and the prospect of autonomous machine creation all demand a further response. Furthermore, relying on self-regulation has also proven hollow.

Those at the forefront of regulating technology are already grappling with such deficiencies in the law. As David Vladeck, a former Director of the Bureau of Consumer Protection at the US Federal Trade Commission, has pointed out:

[T]he law is not necessarily equipped to address the legal issues that will start to arise when the inevitable occurs and these machines cause injury but there is no ‘principal’ directing the actions of the machine. How the law chooses to treat machines without principals will be the central legal question that accompanies the introduction of truly autonomous machines and at some point, the law will need to have an answer to that question.

One radical answer that is beginning to take shape amongst European regulators, academics, and lawyers is the creation of a new form of legal personality. From the nation-state to the corporation, the law has historically been able and ready to respond to the shifting focus of our eras by redefining personhood. Legal persons are, as famously articulated by the US Supreme Court, “invisible, intangible, and existing only in contemplation of law.” Yet the state and the company are as real and important to our lives as anything physical.

The call for legal personality for artificial intelligence is not new. Indeed, theories around imputing legal personality to technology have been around since as early as 1992. However, the increasing pace of technological advances has brought such discussions to the fore. Indeed, the European Union, in a 2017 resolution on Civil Law Rules on Robotics, suggested a “specific legal status” for advanced technologies.

These challenges all present themselves in contemporary legal issues. The current data protection regime is limited by the concepts of “data controllers” and “data subjects.” So what happens when there is no easily identifiable controller? Or where the tech giants put the machine at arm’s length from their control in order to avoid accountability?

This is not an abstract notion. Practitioners like me are frequently faced with legal façades. Indeed, in the Google Spain case, more often known as the right to be forgotten case, Google argued before the European Court of Justice that Google Search is an automated decision-making tool over which it had no control. The algorithmic decision-making was independent, the argument went, and should therefore shield the principal from the results produced by its agent. The CJEU thought otherwise, finding that search was to Google as the monster was to Dr. Frankenstein: in each case, the creator is liable for the results of its creation. Unlike Dr. Frankenstein and his unique creation, however, Google has numerous products of ever-increasing sophistication and ability. And Google is just one player in the race to develop tech.

Conclusion

As we enter a new era of non-organic life evolving by intelligent design, the contours of our existing legal regimes are being reconsidered. The very foundations of our legal settlement may not be able to stand up to the problems presented by future technology. Examples cited by those considering this future include issues of causation and negligence. Causation melts away when intention cannot be attributed, and how is negligence to be found when the “reasonableness” of a machine’s actions cannot be measured? How are we to judge decisions when they are cut loose from any human input, or where decisions are made without a principal standing behind those decisions at all?

These questions are being debated in Europe as pressing issues for the ages. Society has responded to such questions before. The international legal framework developed in response to the rising power of the nation-state, for example, by carefully balancing the rights and responsibilities of nations. National personhood is unlike that of natural persons and, like other legal fictions, carries with it a distinct sense of personality. International law developed a body of rules for dealing with these entities, from the creation of states to the management of diplomatic relations. A court has been established to arbitrate between them. Martti Koskenniemi described the development of that international legal order as having developed with a view to becoming the “gentle civilizer of nations.”

The inescapable reality of our future is that technology will continue to develop in ways we cannot envisage. If information is power, we are ceding control of information, to an ever-increasing degree, to non-human entities, all while the data in the Internet of Things has the ability to redefine human existence in ways yet to be understood. The law must develop at equal pace to ensure it remains relevant and able to act as the gentle civilizer of technology.