Why does the value we place on our personal privacy suddenly go out the window when we head onto the internet, trading it away for minor benefits?
The question above refers to a phenomenon in internet ethics known as 'the privacy paradox'. Philosophers, sociologists, and psychologists have been discussing it for decades, and it will persist until we truly understand how our personal information is being used and what we're signing up for.
What is a paradox?
In logical form, all arguments are made up of premises (p) – statements that, taken together, establish a conclusion (c). An example of this might be: (p1) the grass is wet only if it has rained; (p2) the grass is wet; (c) therefore, it rained. The conclusion follows logically from the premises.
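For those who like their logic in symbols, the same argument can be sketched formally – a minimal illustration, with W and R as shorthand I'm introducing here for 'the grass is wet' and 'it has rained':

```latex
% A sketch of the example argument in propositional logic.
% W = "the grass is wet", R = "it has rained" (shorthand introduced here).
\begin{align*}
  p_1 &: W \rightarrow R && \text{(the grass is wet only if it has rained)} \\
  p_2 &: W               && \text{(the grass is wet)} \\
  c   &: R               && \text{(therefore, it rained -- by modus ponens)}
\end{align*}
```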
A paradox, on the other hand, can generally be defined as a set of premises that each seem plausible in isolation yet are mutually inconsistent, often producing a counterintuitive, contradictory, or nonsensical conclusion. In the context of privacy, the paradox looks like this:
- P: Generally, humans value and prioritize their privacy to a great extent; people from all corners of the globe understand and experience the desire to be in control of their personal information.
- P: Generally, humans compromise their own privacy online on a daily basis, agreeing to have huge amounts of personal information shared for relatively minor benefits.
- C: Humans both value and do not value their personal privacy.
This conclusion is unacceptable; you cannot value and not value something simultaneously. But still, it persists – both premises are seemingly true.
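Put in the same sketch notation (with V as my own shorthand for 'humans value their privacy'), the shape of the problem is plain to see:

```latex
% The paradox's structure in the same sketch notation.
% V = "humans value their privacy" (shorthand introduced here).
\begin{align*}
  p_1 &: V               && \text{(our stated attitudes: we value privacy)} \\
  p_2 &: \neg V          && \text{(our online behavior: we do not)} \\
  c   &: V \wedge \neg V && \text{(a contradiction)}
\end{align*}
```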
A closer look at the privacy paradox
In the non-virtual world we call real life, humans are, despite being social creatures, extremely precious about their spheres of privacy. We have to get to know people for months – and sometimes years – before certain personal tales can be told or elements of our personality revealed. Once just a simple desire, privacy is now recognized as a human right in much of the world.
As human values go, privacy isn't often expanded upon; the concept is valued in and of itself, rather than as a means to an end. The statement 'I like my privacy' doesn't really need a 'because' after it – we all understand exactly what it means: we just like our privacy.
The privacy paradox arises precisely at the moment this value collapses. We head online and, despite claiming to care about our privacy, begin trading away important facts about our lives for minor benefits such as access to a website or online service. Our data is filtered through organizations we've heard of – the ones we've 'agreed' terms with – to organizations we haven't.
As knowledge of dodgy privacy practices has grown, our real-world values have not come to align more closely with our online behavior. Instead, the paradox has become more deeply entrenched and been pushed further towards its illogical extreme: we now know more than ever about how our data is being used, yet continue to give increasing amounts of it away.
What's the solution to the privacy paradox?
There are two main routes out of a paradox. One is simply to 'bite the bullet' and concede that there really is an unassailable contradiction at play. The other is to challenge one of the premises – to contend that it is not quite what it seems, that the paradox has misled us somewhere, and that the contradiction is merely superficial.
Some possible explanations
If we can, avoiding the bullet is preferable – biting it wouldn't make for a very satisfying conclusion in this case. Luckily, there are a number of explanations for why our approach to online privacy doesn't square with our real-world conceptions. What follows isn't an exhaustive list, but the three explanations selected illustrate the different angles from which one can attack the problem.
Is privacy on the internet just 'different'?
One proposed solution to the paradox is to deny that the word 'privacy' in premises 1 and 2 is truly referring to the same thing. On this line of thinking, our conceptions of privacy are static: they were formed in, and belong to, real-world interactions in spaces we physically inhabit.
The internet, on the other hand, is an entirely non-physical space. It's clear that our conception of real-world privacy does not map neatly onto it – and this may be a sign that it doesn't map onto it at all. A refusal to accept this brute fact may go some way to explaining why we've found it so difficult to enforce relevant regulation adequately. On this view, giving away your information online says very little about how much you value 'privacy'.
But the implications of this explanation seem unreasonable. Are we just supposed to accept an uncomfortable level of intrusion into our personal lives, or not use the internet at all? Besides, even if our prior understanding of privacy doesn't quite fit, that doesn't necessarily mean that no privacy code or standard will.
Is privacy now a commodity?
A related solution is to suggest that our privacy in an internet context has been commodified in a way it is not in the real world. We still value our privacy, but online we conceptualize it only as a commodity – a shift that has happened quite gradually and subtly over the last two decades – and, importantly, one we price below access to commodities like social media, or even a Big Mac.
Gordon Hull thinks this is due to the dominance of neoliberal techniques of power we've all unintentionally swallowed. He contends that we have grown accustomed to treating privacy as part of an economic transaction, trading it out for some commodity and thus reducing it to the level of one. We have been taught, he claims, that matters of privacy are about 'individual risk management' – absolving other entities of responsibility – and can thus function as a 'technology of neoliberal governance'.
This relates somewhat to the idea that virtual objects – and data, for that matter – are not things we assume ownership of in the way we do physical objects, which adds to the sense that privacy can be swapped and traded online in a way it wouldn't be with a real-world analog.
Is privacy just not a priority?
Another explanation involves conflicting desires – is there something we derive from the internet that we value or desire more than privacy, and that is skewing our decision-making? It's plausible. We weigh desires and rights against each other all the time when we make decisions, and privacy taking primacy in the real world is no guarantee of its importance in a different arena.
Following this line of reasoning, one potentially 'competing' desire can be found in the writings of the early 19th-century German philosopher Hegel. He contends that the story of humanity has been defined by the individual struggle for recognition – for others to recognize us. For Hegel, this basic desire to be recognized is essential for generating our social existence and place in society; without recognition, self-respect, self-esteem, and self-worth cannot be acquired.
The internet has ushered in not only a rise in government and corporate surveillance, but also an unprecedented rise in self-documentation and the uploading of our own personal information to social media. Viewed through a Hegelian lens, this has arisen because there is now an almost infinitely large pool of people willing to 'recognize' us. If this is the psychological mechanism at play – and it's plausible, considering the way social networking has developed – users may see the loss of privacy as a worthwhile trade-off of sorts.
The problem with this explanation is that it raises further questions. Is recognition only weighted so heavily against privacy because our conception of online privacy – and of where our information is sent – is so out of touch? Would the pros and cons, explained in their totality, make the desire for recognition much less pertinent, or motivate us to seek it elsewhere? Is it a false trade-off in the first place? These are all open questions.
What does the paradox say about us?
All of these 'solutions' to the paradox uncover something different about our relationship with our own privacy and the internet, but they're all interconnected.
The process that led us to become totally dependent on the internet for society to function was rapid, to say the least. This gave us a very small 'translation window' in which to apply, modify, or update our understanding of privacy for a new communication landscape.
The winners in this transition were companies like Amazon and Facebook – their rapid rise to dominance set the terms of the trade-off and, in turn, asked us to consider our privacy in a different way: "you can't use our services, the cornerstones of 21st-century life, unless you compromise it."
This process has distanced us from the reasons why, and the context within which, we value privacy in non-virtual spaces – so much so that it doesn't even 'feel' like we're compromising our own privacy when using these sorts of platforms.
It's arguable that some of this is unavoidable – social media sites, for example, need some of our personal information to function at all. Yet their notice-and-consent model has made it much easier to use that information for purposes users never intended, whilst maintaining that the fault lies with users themselves.
When you add reasons to compromise your privacy that carry some other 'benefit' – anything from uploading pictures of yourself at a party for your hundreds of followers to see, to viewing an article you really want to read – the waters are muddied further.
Will regulation make a difference?
The EU's GDPR – and the corresponding fines for breaking it – was intended to make companies stewards of their customers' privacy. According to the EU, it seems to be having a positive effect.
Subjecting the companies that hold and use our data to stringent regulation is an obvious way to address the state of play, as well as protect users; but regulation, for the most part, is unlikely to change how much we value our online privacy.
Conversely, how little users value their online privacy already seems to have affected the efficacy of regulation. One of the immediate repercussions of GDPR for consumers was 'consent fatigue': privacy notices became longer, denser, and more widespread, but were generally left unread as people were asked to consent to ever more ways their data could be used.
The lack of value placed on these transactions will have aided companies that want to smuggle questionable clauses into their privacy agreements and ensure that the burden of responsibility is continuously shifted onto users.
Bridging the knowledge gap
The one inarguable truth that neither the proposed solutions nor the paradox itself objects to is that humans value their privacy. What we haven't been able to pinpoint is exactly why that value dissolves online.
I think the gap that needs to be bridged here is, broadly, a gap in knowledge. Knowledge is a necessary prerequisite of determining value; the more you know about something, the more accurately its value can be determined. Knowledge isn't the only thing needed to motivate a change in behavior, but it's certainly a critical one in this instance – particularly as a change in values needs to occur. I also take solace in the fact that even the most apathetic attitudes to online privacy have highly valued real-world analogs that require very little to materialize.
In the real world, our privacy scruples are quite finely tuned. If someone snatches your phone off you in the street, you will likely experience the immediate physical effects of a threat to your privacy, because you can see and feel the threat and you know bad things happen when someone steals your personal information. You'd have the same kind of reaction if confidential medical files, an address book, or a diary were snatched – you value the private nature of that information, and the things you value most deeply tend to provoke the most vivid emotional responses when threatened.
Conversely, most people still do not know what it means when a social media company or website asks them to agree to a privacy notice, or where their data goes. They cannot see or feel that threat to their privacy, so they don't impart the same value to the transaction – yet I think the reaction would be severe enough to change behavior if the threat were truly understood and processed. I also think forging this understanding would help us claw back privacy as a right rather than a commodity.
John Naughton, writing in the Guardian two years ago, questions whether this knowledge does actually change people's behavior in this specific context. In studies on this topic, it's not uncommon to find a cohort of subjects apathetic to how their data is used, even when it is explained to them.
But the example he cites consists of telling people which social categories Facebook places them in and seeing how they react – and I am not convinced that this is the kind of knowledge needed to form the genesis of a meaningful shift in behavior and values. To clarify: I concede that most people understand what 'we only share your data with trusted third parties' means on the surface (that a site shares data with other sites), but I don't think most users really understand what a 'third party' can be, where their information eventually ends up, or how this helps build their advertising profile.
If users understood but simply didn't care, that wouldn't explain why Google, Facebook, and Amazon consistently top consumer trust surveys. Apathy towards a company doesn't lead to (misplaced) trust; a lack of true understanding, on the other hand, certainly can.
Digital literacy: the only way out?
How do we make such a shift happen? The only feasible, long-term solution I can see at present is a drastic re-imagining of the content, structure, and importance of digital literacy as a curriculum subject – and in many cases, this will need to be done from scratch. Such a subject would cover issues from data privacy to disinformation, equipping young people with the tools and knowledge they need to traverse the complex internet landscape we are all increasingly dependent on.
Yet it seems many governments are yet to recognize the pressing nature of this issue – or worse, are simply ignoring it. The idea, unfortunately, isn't new: the UK government, for example, rejected calls from the DCMS select committee to make digital literacy a core pillar of education just a couple of years ago. If any government refused to implement a sufficiently extensive literacy curriculum – while the society it governed was starting to see the effects of people not being able to read – I think its citizens would be up in arms. But the connection is yet to be made.
I'm not suggesting that ramping up digital literacy in education settings will suddenly make millions quit social media en masse, nor that it would be a good thing if they did. Social media companies can continue to produce fuzzy privacy notices and sell data through the various avenues that lawmakers don't yet fully understand, but this will be much harder if people truly start to value their online privacy as if it were no different from their real-world privacy.
Information resources and privacy-conscious alternatives
A change of this sort would, of course, benefit from support from information resources like ProPrivacy, which reach people education systems can't and are currently flying the digital literacy flag whilst many governments still refuse to.
As we covered, knowledge is an integral piece of the puzzle, but it isn't the only thing needed to change behavior – we are, in a way, addicted to many of the sites we compromise our own privacy to access.
It would be interesting to see how the value we place on online privacy develops as people become more aware of the increasingly competitive, privacy-preserving alternatives to the sites they're used to. We may not have a like-for-like replacement for Google yet, but sites like DuckDuckGo are getting pretty close with far smaller pools of resources.
Conclusion
The privacy paradox is alive and well amongst millions upon millions of people, and it's unclear whether they understand the path their personal information takes when they agree to privacy notices online.
The issue, as has become customary with any internet-induced problem, is that it always feels like we're playing catch-up. The GDPR rules implemented in 2018 could and should have been actioned years earlier, whilst disinformation disasters are starting to seriously impact democracies around the world without much serious response. An internet-related issue is yet to arise that humanity, by and large, has really been on the front foot with.
Whether we like it or not, when it comes to privacy we're operating in a situation where the burden of responsibility has been shifted unfairly onto individuals – so arming users with the right tools and knowledge is crucial. But quite radical, education-based change is needed too: not only does the internet throw new problems at us all the time, it does so in an arena where many of our values and norms collapse. We have to be better prepared.
That citizens value their privacy is one of the basic tenets upon which real-world societies function. But unless we work out how to better translate this into online spaces, for tech's biggest players, it will remain a mere afterthought.