Let's face it: most of us don't really know how social media works, how to get our Facebook newsfeed to obey us, or how geo-tagging functions.
Unless you're a techie or computer geek, you hardly realise the extent to which you're sharing information about yourself on the internet with complete strangers, advertisers or, most alarmingly, government agencies. You don't need to be tech-challenged to be confused about privacy settings on social media – rather, the settings are made to be incomprehensible to most human beings.
Last week, Facebook published an update to its user policy declaring its intent to develop a facial recognition feature incorporating one billion users' data into yet another database. Facebook will use profile photos as a form of digital fingerprint, and users will be automatically recognised and tagged in any photo in which they appear on Facebook. This is not just a way to trace a person's every move; it is also a powerful research tool.
But Facebook assures you that you can opt out of the feature by turning off the tag suggest option. Now, if you could only figure out where it's located within the labyrinthine privacy settings menu.
Users should take note of the remarkable language with which Facebook presents this new technology. Facebook's Chief Privacy Officer Erin Egan described the feature as "giving users better control over their personal information. Our goal is to facilitate tagging so that people know when there are photos of them on our service".
Surely, one of Facebook's goals is to improve the user experience – but this is trivial compared with the commercial and surveillance potential of such a tool. Facebook deploys the illusion of control while treading on the sticky surface of privacy: users are made to think they have greater control over their information and can manoeuvre their personal data to their liking.
Using a happy-go-lucky, "sharing is better" lingo, Facebook claims that facial recognition is there to make our lives better and easier. When you try to opt out of the feature in the settings, Facebook tells you: "We encourage you to consider how tag suggestions benefit you and your friends. Our tagging tools are meant to make it easier for you to share your memories and experiences with your friends."
Legal quandaries

The legal questions surrounding facial recognition data remain murky. The future use of the collected data is uncertain, with no promises being made against its misuse or its sale to third parties. Although Facebook promises it will always keep users in control of their information, such informal statements carry little weight in the absence of a larger governing structure protecting civilians from privacy abuses.
On August 20, right around the time facial recognition resurfaced, Mark Zuckerberg published an article on Facebook's official blog. In a ten-page manifesto of sorts, the connectivity prophet claims that one of society's biggest problems is that the vast majority of people in the world don't have internet access. He then proceeds to unveil his plan to overcome this challenge by listing ways in which technology companies could provide internet access at low cost. Zuckerberg's little red book of connectivity is laden with tech talk that treats privacy as a barrier to human evolution, and asks the rather comical question: "Is Connectivity a Human Right?"
When Facebook first rolled out facial recognition features three years ago, there was a strong reaction against them from privacy groups. Last week, when the social network declared that it was "reconsidering" including its users' data on its servers, it was quietly testing the waters again. Yet Facebook's swift move did not escape European legislators. Johannes Caspar, the Hamburg Commissioner for Data Protection and Freedom of Information, who had forced the social network to delete data on European users the first time around, expressed shock that the site had the audacity to bring up facial recognition again.
The most serious concern about the technology clearly has nothing to do with its proposed use – namely, to make our lives happier by sharing moments with our friends. Such a claim sounds nice, but the recent exposés of mass surveillance programmes show that civilians have a lot to worry about. The classified information disclosed to the press by Edward Snowden reveals the National Security Agency working alongside major companies such as Google and Facebook, among others, in PRISM, the NSA's mass electronic surveillance and data-mining programme.
A symbiotic relationship

A recent New York Times article reported that an increasing number of former Pentagon intelligence officers are joining technology startups in Silicon Valley. The article illustrates the symbiotic relationship between venture capitalists, who seek out military intelligence expertise for presumed commercial benefit, and former intelligence officers who have had access to legally questionable surveillance programmes such as PRISM and XKeyscore.
With corporations and governments investing heavily to protect themselves from enemy attacks, as well as to gather intelligence on their rivals, the relationship with Silicon Valley is mutually beneficial. Former intelligence officers become hotshot entrepreneurs earning big bucks, their military background acting as a unique selling point for clients, while private companies invest in former officers to gain an intelligence edge over rivals.
Facebook's "human connectivity" project, with features such as facial recognition and geo-tagging, also makes use of this intelligence. In 2012, Facebook bought the company Face.com for $60m to acquire its facial recognition technology. Face.com was an Israeli startup that pioneered the technology based on experience with sophisticated military software. In fact, when it comes to tech startups there is often an exchange of talent between Israel's booming "Silicon Wadi" and its Californian counterpart.
There was a time when sharing your data was seen positively, as the democratisation of the internet, sharing ideas and knowledge with the rest of the world at no cost. So we all became bloggers and downloaded every possible app to tell our stories to the world. But now many of us are wondering: Just how much information do they have on us? How many surveillance programmes have to be exposed before we start challenging that seemingly innocent "tag" button on Facebook? In the end, who can guarantee that our online behaviour won't publicly embarrass us or even incriminate us in the future?
As the lines between public and commercial interests blur and more services are exchanged between government and private companies – as with the news of AT&T supplying data to drug enforcement agents – there is a growing need for a system that protects citizens from privacy abuses.
We're becoming deeply embedded in this brave new world of social sharing. We're living in a time when not wanting to share the most private information about our lives is increasingly treated as anti-social behaviour. We often find ourselves bullied into submission, not only by the hundreds of apps on our smartphones and computers, but also by our peers.
Are we becoming a surveillance society far more quickly than we could have imagined? Zuckerberg tells us that sharing is one of our basic human rights, so how could it possibly hurt us? While reflecting on this supposed basic human right, we must also take note of the alarming comment made by Facebook's chief privacy officer, Erin Egan: "Can I say that we will never use facial recognition technology for any other purposes? Absolutely not."
It is exactly this attitude that reveals the vulnerability of our privacy and the ambiguous future of our data. In the face of such a threat, we must at least assert our rights and be aware of the sneaky ways new surveillance tools are being introduced into our lives – especially those that we ourselves consent to.