Issues Magazine

What Happened to Privacy? Really?

By Stephen Wilson

Powerful forces – some of them megatrends in their own right – are pulling privacy in different directions.

The cover of Newsweek on 27 July 1970 featured a cartoon couple cowering before computer and communications technology, with the urgent all-caps headline “IS PRIVACY DEAD?” Four decades on, Newsweek is dead but we’re still asking the same question.

Privacy is traditionally thought to be so central to the human condition that it’s enshrined in Article 12 of the United Nations’ Universal Declaration of Human Rights. Yet every generation or so, our notions of privacy are challenged by a new technology. In the 1880s it was photography and telegraphy; in the 1970s it was computing and consumer electronics. And now it’s the internet, a revolution that has virtually everyone connected to everyone else (and soon everything) everywhere – and all of the time. Some of the world’s biggest corporations now operate with just one asset – information – and a vigorous “publicness” movement rallies around the purported liberation of shedding what are said to be old-fashioned inhibitions (Jarvis J., Public Parts, Simon & Schuster, 2011). Online social networking, e-health, crowdsourcing and new digital economies appear to have shifted some of our societal fundamentals.

However, the past decade has seen a dramatic expansion of countries legislating data protection laws in response to citizens’ insistence that their privacy is as precious as ever. And modern information technologies bring, through cryptography, the promise of absolute secrecy. Privacy has long stood in opposition to the march of technology: it is the classical immovable object met by an irresistible force.

So how robust is privacy? And will the latest technological revolution finally change privacy forever?

Soaking in Information

We live in a connected world. Young people today may have grown tired of hearing what a difference the internet has made, but a crucial question is whether relatively new networking technologies and sheer connectedness are exerting novel stresses to which social structures have yet to adapt. Francis Bacon was the first among many to have said “knowledge is power”. Thanks to the availability of information, individuals are probably more powerful today than at any time in history. Search, maps, Wikipedia, online social networks and 3G are taken for granted. Unlimited deep technical knowledge is available in chat rooms; universities are providing a full gamut of free training via massive open online courses (MOOCs). The internet empowers many to organise in ways that are unprecedented for political, social or business ends. Entirely new business models have emerged in the past decade, and there are indications that political models are changing too.

Most mainstream observers still tend to talk about the “digital” economy, but many think the time has come to drop the qualifier. Important services and products are, of course, becoming inherently digital, and whole business categories such as travel, newspapers, music, photography and video have been massively disrupted. In general, information is the lifeblood of most businesses. Every single conventional supply chain is increasingly digital. The Australian mercantile landscape has not yet adjusted to the reality that around 7% of all retail sales occur online, and the proportion continues to rise rapidly (www.pwc.com.au/industry/retail-consumer/assets/Digital-Media-Research-Ju...). The fortunes of countless technology billionaires have been made in industries that did not exist 20 or 30 years ago, and the only asset of some of these businesses is information.

Banks and payments systems are getting in on the action, innovating at a hectic pace to keep up. There is a bewildering array of new alternative currencies like Linden dollars, Facebook credits and Bitcoins, all of which can be traded for “real” (Reserve Bank-backed) money in a number of exchanges of varying reputation. At one time it was possible for Entropia Universe gamers to withdraw dollars at ATMs against their virtual bank balances.

New ways to access finance have arisen, such as “peer-to-peer” lending and crowd funding. Several so-called direct banks in Australia exist without any branch infrastructure. Conventional financial institutions are under enormous pressure in the cyber environment and are responding with, among other things, new payment services inside online social networks and even virtual worlds. Banks, of course, are keen to not have too many sales conducted outside the traditional payments system, where they make their fees. Even more strategically, banks want to control not just the money but the way the money flows, because it has dawned on them that information about how people spend might be even more valuable than what they spend.

Privacy in an Open World

For many of us, on a personal level, real life is a dynamic blend of online and physical experiences. The distinction between digital relationships and flesh-and-blood ones seems increasingly arbitrary; in fact, we probably need new words to describe online and offline interactions more subtly, without implying a dichotomy.

Today’s privacy challenges are about more than digital technology: they really stem from the way the world has opened up. The enthusiasm of many for such openness – especially in online social networking – has been taken by some commentators as a sign of deep changes in privacy attitudes. Facebook founder and CEO Mark Zuckerberg, for instance, said in 2010: “People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people – and that social norm is just something that has evolved over time”. And yet serious academic investigation of the internet’s impact on society is (inevitably) still in its infancy. Social norms are constantly evolving, but it’s too early to tell if they have reached a new and more permissive steady state. The views of information magnates in this regard should be calibrated in light of their vested interest in “promiscuity”.

At some level, privacy is about being closed. And curiously for a fundamental human right, the desire to close off parts of our lives is relatively fresh. Arguably it’s even something of a “first world problem”. Formalised privacy appears to be an urban phenomenon, unknown as such in tribal and feudal societies where everyone knew everyone and their business. Only when large numbers of people congregated in cities did they become concerned with privacy. Then they felt the need to structure the way they related to large numbers of people – family, friends, workmates, merchants, professionals and strangers – in multi-layered relationships.

So privacy was born of the first industrial revolution. It has taken prosperity and active public interest to create the many mechanisms we take for granted today for routinely protecting our privacy. Consider the importance of the postal service, direct dial telephones and telecommunications regulations. And what would life be like without the luxury of individual bedrooms in large houses, cars in which we can escape for a while, and the mobile handset?

The earliest jurisprudence in privacy was by Samuel Warren and Louis Brandeis well over a century ago. Their 1890 article “The Right to Privacy” informed several pivotal US Supreme Court cases through the early 20th century, and even today still stands as a peak reference work. Warren and Brandeis were expressly responding to the controversial technologies of their day, namely photography, telegraphy and the daily newspaper.

In Control

Privacy is about respect and control. Simply put, if someone knows me then they should respect what they know. They should exercise restraint in how they use that knowledge, and be guided by my wishes. Generally, privacy is not about anonymity or secrecy. Of course, if one lives his or her life underground then absolute privacy can be achieved, yet most of us exist in diverse communities where we actually want others to know a great deal about us. We want merchants to know our shipping address and payment details, healthcare providers to know our intimate details, hotels to know our travel plans and so on. Practical privacy means that personal information is not shared arbitrarily, and that individuals retain control over the tracks of their lives.

In 1980 the OECD ratified a set of privacy principles that have since been enacted in more or less the same form as data protection statutes in more than 100 countries. Central to the OECD framework are the tenets that personal information should only be collected for a defined and agreed purpose, and that personal information collected for one purpose should not be used without consent for another. These principles are technology-neutral. Framed well before the advent of the internet, they have nevertheless been applied many times over to limit the activities of information companies, as we can see from two recent case studies.

Google’s Collection of Wi-Fi Data

Google’s powerful worldwide mapping services are layered on top of a global map of “indicia” points that are accessible to mobile devices and provide clues about location. Among the most important indicia are Wi-Fi routers. Google maintains a database of Wi-Fi locations, many of which are operated in private homes and businesses but are accessible from the street. The database is fed with data obtained automatically via Google’s fleet of StreetView cars as they engage in other mapping operations.
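The geolocation idea can be sketched simply: a device reports the router identifiers it can see, and its position is estimated from the known locations of those routers. The following is a minimal, illustrative Python sketch only – the router identifiers, coordinates and signal strengths are invented, and real systems use far more sophisticated models:

```python
# Illustrative sketch of Wi-Fi-based geolocation: a device reports the
# router identifiers (BSSIDs) it can see, and its position is estimated
# as a signal-strength-weighted average of the routers' known locations.
# All identifiers and coordinates below are invented for illustration.

KNOWN_ROUTERS = {
    "aa:bb:cc:00:00:01": (-33.8650, 151.2094),  # (latitude, longitude)
    "aa:bb:cc:00:00:02": (-33.8655, 151.2100),
    "aa:bb:cc:00:00:03": (-33.8660, 151.2088),
}

def estimate_position(scan):
    """scan maps visible BSSIDs to signal strengths (higher = closer)."""
    total = lat = lon = 0.0
    for bssid, strength in scan.items():
        if bssid in KNOWN_ROUTERS:
            r_lat, r_lon = KNOWN_ROUTERS[bssid]
            lat += r_lat * strength
            lon += r_lon * strength
            total += strength
    if total == 0:
        return None  # no known routers in range
    return (lat / total, lon / total)

# A device seeing two known routers, the first more strongly, is placed
# between them, biased toward the stronger signal:
print(estimate_position({"aa:bb:cc:00:00:01": 3.0, "aa:bb:cc:00:00:02": 1.0}))
```

The point of the sketch is that only the routers’ identifiers and positions are needed for the service to work – which is why the network *contents* logged by the StreetView cars were extraneous.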

In 2010 a privacy problem emerged. It was found that the software in the StreetView cars was not only logging the anonymous Wi-Fi router identifiers but was also inadvertently recording short periods of Wi-Fi network data; that is, the information moving about on networks within range of the cars. Investigations by privacy regulators in Europe and Australia found that a significant amount of the information was personally identifiable, and collecting it without a good reason was in breach of privacy laws. Google accepted the findings, agreed to review its practices to prevent such over-collection, and promptly deleted all the extraneous Wi-Fi contents from its systems (none of the contents had been incorporated in the geolocation systems, and Google’s mapping activities have otherwise proceeded without needing any change).

Many technologists regarded the regulators’ findings as counter-intuitive, arguing that Wi-Fi data (if unencrypted) was in the “public domain” and therefore not “private”. Further, no harm was done by logging such data, as Google was not acting upon it in any way.

Yet these positions are simply peripheral to the letter of the law. The Privacy Act 1988 (Cwlth), for instance, is concerned with personally identifiable information regardless of where it comes from. And the Act does not embody a harms test. Our law basically does not distinguish public from private. In fact, the words “public” and “private” are not operable in the Act. The very collection without justification of identifiable data is a breach.

Facebook’s Collection of Biometric Matches

Facial recognition and automatic photo tagging are among the great innovations in social media sites, enabling new ways to organise photo albums and to search visual content. After offering user-initiated tagging for some years, Facebook rolled out its automatic tag suggestion feature, which utilises advanced biometric processes. To generate suggestions, Facebook’s facial recognition algorithms run in the background over all photo albums. When they make a putative match and record a deduced name against a hitherto anonymous image, they are in effect collecting fresh personal information, albeit automatically.
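The matching step described above can be sketched in miniature. A face in a photo is typically reduced to a numeric “template” (a feature vector), and matching compares it against the templates of already-named faces. The vectors, names and threshold below are invented for illustration and bear no relation to Facebook’s actual algorithms:

```python
import math

# Illustrative sketch of biometric tag suggestion: compare an unknown
# face template against templates of already-named faces, and suggest
# the closest name if it is within a distance threshold. All vectors,
# names and the threshold are invented for illustration.

NAMED_TEMPLATES = {
    "Alice": [0.9, 0.1, 0.3],
    "Bob":   [0.2, 0.8, 0.5],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def suggest_tag(unknown_template, threshold=0.3):
    """Return the closest named match, or None if nothing is close enough.
    Note: a successful match *creates* new personal information - a name
    is now linked to a previously anonymous image."""
    best_name, best_dist = None, threshold
    for name, template in NAMED_TEMPLATES.items():
        d = euclidean(unknown_template, template)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

print(suggest_tag([0.85, 0.15, 0.25]))  # close to Alice's template
```

The privacy significance sits in the return value: the moment a match is suggested, a name has been deduced and recorded against an image where none existed before.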

OECD-style privacy regulations limit the collection of personal information as well as its use and disclosure. The large scale “mining” of photos uploaded to Facebook for personal use in order to create tag suggestions as a by-product was not something that users were generally aware of, much less in agreement with.

European privacy regulators in mid-2012 found biometric data collection without consent to be a serious breach of privacy laws. They forced Facebook to cease facial recognition and tag suggestions in the European Union and to delete all biometric data collected from European users. At the time of writing, Facebook continues to contest the ruling, but it has been a major show of force over one of the most powerful companies of the digital age.

Big Data: Big Future

Biometric tag suggestion is but one example of a whole class of Big Data processes coming to the fore. Big Data refers to a world of powerful statistical tools that can bring order to vast raw data sets, extract patterns and knowledge out of the noise, and make predictions from it. These tools are being applied everywhere, from sifting telephone call records to spot crimes in the planning, to DNA and medical research. Every day, retailers use sophisticated data analytics to mine customer data, ostensibly to better uncover true buyer sentiments and continuously improve their offerings. Some department stores are interested in predicting such major life-changing events as moving house or falling pregnant, because then they can target whole categories of products to their loyal customers. But when Big Data processes like this start to synthesise and collect health information, the stakes are raised significantly.
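The kind of retail prediction described above can be sketched as a simple scoring model: each purchase is treated as a weak “signal” for a life event, and a customer’s signals are summed into a score. The products, weights and threshold below are invented for illustration; real retail analytics use far richer statistical models:

```python
# Illustrative sketch of purchase-based prediction: each product carries
# a small weight as a signal for a life event (here, moving house), and
# a customer's purchases are summed into a score. All products, weights
# and the threshold are invented for illustration only.

MOVING_HOUSE_SIGNALS = {
    "packing boxes":         0.5,
    "packing tape":          0.3,
    "change-of-address kit": 0.6,
    "kettle":                0.1,
}

def moving_score(purchases):
    # Unlisted products (e.g. "milk") contribute nothing.
    return sum(MOVING_HOUSE_SIGNALS.get(item, 0.0) for item in purchases)

def predict_moving(purchases, threshold=0.7):
    return moving_score(purchases) >= threshold

print(predict_moving(["packing boxes", "packing tape", "milk"]))  # strong signals
print(predict_moving(["kettle", "milk"]))                         # weak signals
```

Note that no single purchase reveals anything sensitive; it is the *synthesis* across purchases that produces new personal information, which is exactly why such inferences raise the stakes.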

Real-time Big Data will become embedded in our daily lives through several synchronous developments. First, computing power, storage capacity and high speed internet connectivity all continue to improve at exponential rates. Second, there are more and more “signals” for data miners to choose from. No longer do you have to consciously tell your online social network what you like or what you’re up to, because new “augmented reality” devices are automatically collecting audio, video and locational data, and using image analysis to automatically work out what you’re doing, when, where and with whom. And miniaturisation is leading to a whole range of “ubiquitous computing”: smart appliances, wearable computers and even smart clothes with built-in sensors and networking.

The privacy risks are obvious, and yet the benefits are huge. So how should we think about the balance in order to optimise the outcome? Let’s remember that information powers the new digital economy, and the business models of many major new brands like Facebook, Twitter, Foursquare and Google incorporate a bargain for personal information. We obtain fantastic services from these businesses “for free” but in reality they are enabled by all that information we give out as we search, browse, like, friend, tag, tweet and buy. It’s said that if you’re not paying for something you may not be the customer so much as the product itself.

The more innovation we see ahead, the more certain it seems that data will be the core asset of cyber enterprises. To retain and even improve our privacy in the unfolding digital world we must be able to visualise the data flows that we’re engaged in, evaluate what we get in return for our information, and determine a reasonable trade of costs and benefits.

Is privacy dead? If the same rhetorical question needs to be asked over and over for decades, then it’s likely the answer is no.

*See http://lockstep.com.au/blog.