Today, artificial intelligence is already playing a substantial role in our increasingly connected lives. As the European Commission stated in a report last April, “from using a virtual personal assistant to organise our working day, to travelling in a self-driving vehicle, to our phones suggesting songs or restaurants that we might like, AI is a reality.” And with tech giants like Google and Amazon investing millions of dollars in AI, it is a sure bet that innovation in artificial intelligence will only continue to advance.
It’s worth pausing over the consequences of this technology from a privacy standpoint. The key to successful AI is not just processing power, but also massive amounts of data. The larger and more in-depth the datasets AI has access to, the more accurate its decisions will be. Companies are therefore incentivized to collect or buy large and diverse amounts of data in order to advance AI technology. According to a report by the Center for Information Policy Leadership, artificial intelligence “broadens the types of and demand for collected data, for example, from the sensors in cell phones, cars and other devices.”
AI and De-Identification
There is therefore an apparent tension between the drive towards innovation in artificial intelligence and the right to privacy. New regulations, like the California Consumer Privacy Act of 2018 (CCPA) and the EU’s General Data Protection Regulation (GDPR), pose challenges to some of the collection techniques deployed in order to gather data for AI. Article 22 of the GDPR, for instance, addresses concerns surrounding AI and automated decision-making head on, stating that individuals “have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her,” unless that decision “is based on the data subject’s explicit consent.”
One keyword here is profiling. According to the GDPR, profiling is:
any form of automated processing of personal data evaluating the personal aspects relating to a natural person, in particular to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements.
The problem, however, is that AI is explicitly designed to process and compare large datasets at lightning speed, which makes re-identifying an individual remarkably simple. Researchers from MIT and Berkeley published a study in December in which they took de-identified step-count data from test subjects and were able to use machine learning to re-identify those subjects with almost 95% accuracy. According to a lead researcher on the study, “[As] advances in AI make it easier for companies to gain access to health data, the temptation for companies to use it in illegal or unethical ways will increase. Employers, mortgage lenders, credit card companies and others could potentially use AI to discriminate based on pregnancy or disability status, for instance.”
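The study’s core idea can be illustrated with a toy sketch. This is not the researchers’ actual method; it is a minimal, invented example (fabricated names and step counts) showing why behavioral data resists de-identification: each person’s activity pattern acts as a fingerprint, so a simple nearest-neighbour match can link “anonymous” records back to known profiles.

```python
import math
import random

random.seed(0)

# Hypothetical identified dataset: each person's hourly step counts form a
# behavioural "fingerprint". All names and numbers here are invented.
def make_profile(base, spread):
    return [max(0.0, random.gauss(base, spread)) for _ in range(24)]

identified = {
    "alice": make_profile(400, 50),
    "bob": make_profile(150, 30),
    "carol": make_profile(800, 80),
}

# A "de-identified" release: the same people, names stripped, measured on a
# different day, so each record is a noisy copy of that person's usual pattern.
def noisy(profile):
    return [max(0.0, x + random.gauss(0, 40)) for x in profile]

anonymous_release = [
    (f"record_{i}", noisy(p)) for i, p in enumerate(identified.values())
]

def distance(a, b):
    # Euclidean distance between two 24-hour step-count profiles.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def reidentify(anon_profile, known):
    # Link the anonymous record to the known person whose pattern it most
    # resembles -- no names needed, the behaviour itself identifies them.
    return min(known, key=lambda name: distance(anon_profile, known[name]))

for record_id, profile in anonymous_release:
    print(record_id, "->", reidentify(profile, identified))
```

Because the three profiles are well separated relative to the day-to-day noise, every stripped record links straight back to its owner; real studies use richer models, but the failure mode of naive de-identification is the same.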
Consent in Context
Given the unprecedented speed at which AI processes data, it is difficult to see how consent provisions like the GDPR’s can actually be enforced. The challenge is that the GDPR does not account for the diverse contexts in which AI processes data. A self-driving car must be able to recognize pedestrians, for instance, and cannot reasonably obtain consent in that context.
Something like the GDPR’s right to erasure will therefore start to play a larger role. The right to erasure states that data must be forgotten when it is no longer necessary in relation to the task performed, or when the individual has withdrawn consent. Placing the focus on the right to erasure would allow AI to first process the necessary data, then recognize the contexts that data belongs to, and then obtain consent relative to those contexts. Michelle Dennedy, Chief Privacy Officer at Cisco, gives the example of a machine asking for consent for specific tasks:
“It might say, ‘Okay, well I understand you have a dataset served with this platform… and this platform over here. Are you willing to actually have that data be brought together to improve your housekeeping?’ And you might say ‘no.’ It says, ‘Okay. But would you be willing to do it if your heart rate drops below a certain level and you’re in a car accident?’ And you might say ‘yes.’”
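Dennedy’s scenario can be sketched as a small data structure. This is a hypothetical illustration, not any real product or API: consent is recorded per purpose, checked before any processing, and withdrawing the last purpose triggers erasure of the subject’s data, mirroring the contextual-consent-plus-erasure model described above.

```python
# Hypothetical sketch of context-scoped consent. All names (ConsentRegistry,
# the purpose strings, the subject id) are invented for illustration.

class ConsentRegistry:
    def __init__(self):
        self._granted = {}  # subject id -> set of approved purposes
        self._data = {}     # subject id -> data held about that subject

    def grant(self, subject, purpose):
        self._granted.setdefault(subject, set()).add(purpose)

    def withdraw(self, subject, purpose):
        self._granted.get(subject, set()).discard(purpose)
        # Right-to-erasure analogue: once no purpose remains, the data
        # is no longer necessary and must be forgotten.
        if not self._granted.get(subject):
            self._data.pop(subject, None)

    def store(self, subject, payload, purpose):
        # Processing is refused unless this specific context was approved.
        if purpose not in self._granted.get(subject, set()):
            raise PermissionError(f"no consent from {subject} for {purpose!r}")
        self._data[subject] = payload

    def has_data(self, subject):
        return subject in self._data


registry = ConsentRegistry()
registry.grant("user-1", "emergency-response")  # yes to crash detection
# "housekeeping" was declined, so it is never granted.

registry.store("user-1", {"heart_rate": 44}, "emergency-response")  # allowed
try:
    registry.store("user-1", {"rooms": 4}, "housekeeping")          # refused
except PermissionError as err:
    print(err)

registry.withdraw("user-1", "emergency-response")
print(registry.has_data("user-1"))  # data erased once no consent remains
```

The point of the sketch is the separation: consent attaches to a context (a purpose), not to the data wholesale, and erasure follows automatically when the last approved context is withdrawn.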
AI Innovation and Privacy in Tandem
Above all, the concern for privacy within AI technology comes down to the need to promote the continued recognition of the individual’s rights within a free society, virtual or otherwise. Future development therefore needs to be built upon a framework that guides AI applications based on privacy principles. Artificial intelligence may very well save lives, but it must learn to do so without denying the individual rights and freedoms that those same people demand in every other aspect of their lives.