Nothing Up My FB Sleeve

Two weeks ago, Mark Zuckerberg penned an essay detailing Facebook’s shift towards a more privacy-focused platform. “As I think about the future of the internet,” he writes, “I believe a privacy-focused communications platform will become even more important than today’s open platforms.” For Zuckerberg, this predominantly means focusing on his private messaging services (Facebook Messenger, Instagram Direct, and WhatsApp) by rolling out end-to-end encryption across all of them.


But given the myriad privacy scandals plaguing Facebook over the past few years, it is important to look critically at what Zuckerberg is outlining. Many of the critiques written so far focus primarily on the monopolistic power grab he introduces under the term “interoperability.” For Zuckerberg, this means integrating private communications across all of Facebook’s messaging platforms. From a security perspective, the idea is to standardize end-to-end encryption across a diverse set of messaging platforms (including SMS), but, as the MIT Technology Review points out, this amounts to little more than a heavy-handed centralization of power: “If his plan succeeds, it would mean that private communication between two individuals will be possible when Mark Zuckerberg decides that it ought to be, and impossible when he decides it ought not to be.”


However, without downplaying this critique, what seems just as concerning, if not more so, is the concept of privacy that Zuckerberg is advocating. In the essay, he describes his turn towards messaging platforms as a shift from the town square to the “digital equivalent of a living room,” in which our interactions are more personal and intimate. Coupled with end-to-end encryption, the idea is that Facebook will create a space in which our communications are kept private.


But they won’t, because Zuckerberg fundamentally misrepresents how privacy works. Today, the content of what you say is perhaps the least important aspect of your digital identity. Instead, it is all about the metadata. In communication, the who, the when, and the where can tell someone more about you than the what. Digital identities are constructed less by what we think and say about ourselves, and far more through a complex network of information that moves and interacts with other elements within that network. Zuckerberg says that “one great property of messaging services is that even as your contacts list grows, your individual threads and groups remain private,” but who, for example, has access to our contact lists? These are the types of questions that Zuckerberg sidesteps in his essay, but they are the ones that show how privacy actually functions today.
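To make this concrete, here is a minimal sketch of what a message on an end-to-end encrypted service can look like from the platform’s side. The field names are hypothetical, not any actual Facebook schema, but the structure is typical: encryption hides only the content, while the metadata stays readable.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A minimal sketch of a message envelope on an end-to-end encrypted
# service. The field names are hypothetical, not Facebook's actual
# schema. Only `ciphertext` is unreadable to the platform; every other
# field (the metadata) remains visible to it.

@dataclass
class MessageEnvelope:
    sender_id: str      # who is talking
    recipient_id: str   # and to whom
    sent_at: datetime   # when
    client_ip: str      # roughly where
    device_id: str      # on what device
    ciphertext: bytes   # the "what": the only field E2E encryption hides

msg = MessageEnvelope(
    sender_id="user:1842",
    recipient_id="user:9317",
    sent_at=datetime.now(timezone.utc),
    client_ip="203.0.113.7",
    device_id="android-pixel-3",
    ciphertext=b"\x8f\x02\xa1...",  # opaque to the server
)

# From the fields above alone, the platform can still build a social
# graph, a location history, and an activity profile without ever
# decrypting a single message body.
```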


We can concede that, like a living room, end-to-end encryption will give users more confidence that their messages will be seen only by the people within that space. But digital privacy does not function on a “public vs. private sphere” model. If this is a living room, it has the equivalent of a surveillance team stationed outside, recording who enters, how long they stay, how the room is accessed, and so on. For all his failings, we would be wrong to assume that Zuckerberg is ignorant of the importance of metadata; he has built his fortune in large part on it. What we see in his essay, then, is little more than a not-so-subtle misdirect.

Do Androids Dream of Your Privacy?


Artificial intelligence is already playing a substantial role in our increasingly connected lives. As the European Commission stated in a report last April, “from using a virtual personal assistant to organise our working day, to travelling in a self-driving vehicle, to our phones suggesting songs or restaurants that we might like, AI is a reality.” And with tech giants like Google and Amazon investing millions of dollars in AI, it is a safe bet that the technology will only continue to advance.


It’s worth pausing over the consequences of this technology from a privacy standpoint. The key to successful AI is not just processing power, but also massive amounts of data. The larger and more in-depth the datasets an AI has access to, the more accurate its decisions will be. Companies are therefore incentivized to collect or buy large and diverse sets of data in order to advance AI technology. According to a report by the Center for Information Policy Leadership, artificial intelligence “broadens the types of and demand for collected data, for example, from the sensors in cell phones, cars and other devices.”


AI and De-Identification


There is therefore an apparent tension between the drive towards innovation in artificial intelligence and the right to privacy. New regulations, like the California Consumer Privacy Act of 2018 (CCPA) and the EU’s General Data Protection Regulation (GDPR), pose challenges to some of the techniques deployed to gather data for AI. Article 22 of the GDPR, for instance, addresses concerns surrounding AI and automated decision-making head on, stating that individuals “have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her,” unless that decision “is based on the data subject’s explicit consent.”
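As a toy illustration (not a real compliance check), the rule can be reduced to a small guard. The function and parameter names here are invented, and the other Article 22 exceptions, such as contractual necessity, are omitted for brevity.

```python
# A toy model of the Article 22 rule, not a real compliance library.
# Names are invented; exceptions like contractual necessity and
# authorization by law are left out for brevity.

def may_decide_automatically(significant_effect: bool,
                             explicit_consent: bool,
                             human_in_the_loop: bool) -> bool:
    if human_in_the_loop:
        return True            # not "based solely on automated processing"
    if not significant_effect:
        return True            # the rule covers only legal or similarly significant effects
    return explicit_consent    # otherwise the data subject's explicit consent is required

# An automated loan rejection with no consent and no human review is barred:
assert may_decide_automatically(True, False, False) is False
```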


One keyword here is profiling. According to the GDPR, profiling is:


any form of automated processing of personal data evaluating the personal aspects relating to a natural person, in particular to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements.


The problem, however, is that AI is explicitly designed to process and compare large sets of data at lightning speed, which makes re-identifying an individual in supposedly anonymous data remarkably simple. Researchers from MIT and Berkeley published a study in December in which they took de-identified step-count data from test subjects and were able to use machine learning to re-identify those subjects with almost 95% accuracy. According to a lead researcher of the study, “[as] advances in AI make it easier for companies to gain access to health data, the temptation for companies to use it in illegal or unethical ways will increase. Employers, mortgage lenders, credit card companies and others could potentially use AI to discriminate based on pregnancy or disability status, for instance.”
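The mechanics behind such a result are not exotic. The sketch below is not the study’s actual method (the researchers used learned models rather than this naive matching), just an illustration of the general idea on synthetic data: if an attacker holds an auxiliary dataset that links activity patterns to names, from another app, say, even simple nearest-neighbor matching can re-link “anonymous” step-count records to those names.

```python
import numpy as np

# An illustrative re-identification sketch on synthetic data, NOT the
# MIT/Berkeley study's actual method. Each row is one person's hourly
# step counts. The "anonymous" release strips names; the attacker's
# auxiliary dataset (e.g. from another app) still has them.

rng = np.random.default_rng(0)
n_people, n_hours = 1000, 168                      # one week of hourly counts
profiles = rng.poisson(lam=300, size=(n_people, n_hours)).astype(float)

anonymous_release = profiles + rng.normal(0, 10, profiles.shape)  # "de-identified"
auxiliary_names = [f"person_{i}" for i in range(n_people)]        # attacker's labels

def re_identify(target_row: np.ndarray) -> str:
    """Match an anonymous record to the auxiliary data by nearest neighbor."""
    distances = np.linalg.norm(profiles - target_row, axis=1)
    return auxiliary_names[int(np.argmin(distances))]

hits = sum(re_identify(anonymous_release[i]) == f"person_{i}" for i in range(n_people))
print(f"re-identified {hits / n_people:.0%} of 'anonymous' records")
```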


Consent in Context


Given the unprecedented speed at which AI processes data, it is hard to see how consent provisions like the GDPR’s can actually be enforced. The challenge is that the GDPR doesn’t account for the diverse contexts in which AI processes data. A self-driving car must be able to recognize pedestrians, for instance, and cannot reasonably request consent in that context.


Something like the GDPR’s right to erasure will therefore start to play a larger role. The right to erasure states that data must be forgotten when it is no longer necessary in relation to the task performed, or when the individual has withdrawn consent. Placing the focus on the right to erasure would allow AI to first process the necessary data, then recognize the context that data is in, and then request consent relative to those contexts. Cisco’s Chief Privacy Officer, Michelle Dennedy, gives the example of a machine asking consent for specific tasks:


“It might say, ‘Okay, well I understand you have a dataset served with this platform… and this platform over here. Are you willing to actually have that data be brought together to improve your housekeeping?’ And you might say ‘no.’ It says, ‘Okay. But would you be willing to do it if your heart rate drops below a certain level and you’re in a car accident?’ And you might say ‘yes.’”
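A hypothetical sketch of how such context-scoped consent might be modeled follows; the class and purpose names are invented for illustration, and a real system would pair this with erasure of any data that falls outside a granted purpose.

```python
# A hypothetical sketch of context-scoped consent, along the lines of
# Dennedy's example. The API and purpose names are invented: consent is
# granted per purpose, and anything never granted defaults to "no".

class ConsentLedger:
    def __init__(self) -> None:
        self._grants: dict[str, bool] = {}

    def ask(self, purpose: str, granted: bool) -> None:
        """Record the user's answer for one specific purpose."""
        self._grants[purpose] = granted

    def allows(self, purpose: str) -> bool:
        """Processing is permitted only for explicitly granted purposes."""
        return self._grants.get(purpose, False)

ledger = ConsentLedger()
ledger.ask("combine_datasets_for_housekeeping", False)   # "you might say no"
ledger.ask("share_heart_rate_after_car_accident", True)  # "you might say yes"

assert not ledger.allows("combine_datasets_for_housekeeping")
assert ledger.allows("share_heart_rate_after_car_accident")
assert not ledger.allows("targeted_advertising")  # never asked, so no consent
```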


AI Innovation and Privacy in Tandem


Above all, the concern for privacy within AI technology comes down to the need to preserve the individual’s rights within a free society, virtual or otherwise. Future development therefore needs to be built upon a framework that guides AI applications according to privacy principles. Artificial intelligence may very well save lives, but it must learn to do so without denying people the rights and freedoms they demand in every other aspect of their lives.