Google Fined 50 Million Euros for Violations of EU’s New GDPR

The Commission nationale de l’informatique et des libertés (CNIL), France’s national data protection authority, has just levied a 50 million euro fine on Google for violations of the EU’s General Data Protection Regulation. The GDPR, which took effect in May of last year and is widely considered the strictest data protection regulation in force, notably gives much of the control back to the consumer, including opt-in consent for the use of private information. Google is appealing the decision.


The CNIL found Google in violation of two aspects of the GDPR:


First, Google failed to provide adequately transparent information about its use of consumer data. According to the report, “essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalization, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information.”


Second, Google failed to obtain valid consent to process data for ad personalization. The key word here is valid. Google does in fact obtain consent from users, but the CNIL found this consent was not sufficiently informed: “The information on processing operations for the ads personalization is diluted in several documents and does not enable the user to be aware of their extent.” Moreover, the consent obtained was not considered “specific or unambiguous.” Google allows users to access ad configuration; however, “the user not only has to click on the button ‘More options’ to access the configuration, but the display of the ads personalization is moreover pre-ticked.” The consent is therefore not obtained with the “clear affirmative action from the user” required for it to be considered valid.
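As an illustration only (the names and fields below are hypothetical and do not come from Google’s actual systems), the CNIL’s “clear affirmative action” standard can be sketched as a simple rule: a personalization toggle counts as consent only if the user actively switched it on, never when it is merely pre-ticked by default.

```typescript
// Hypothetical sketch of the rule the CNIL applied: consent must result
// from a clear affirmative action, so a pre-ticked box (a default the
// user never touched) can never count as valid consent.
interface ConsentState {
  adPersonalization: boolean; // current state of the toggle
  setByUserAction: boolean;   // did the user actively change it?
}

function isValidGdprConsent(state: ConsentState): boolean {
  // A ticked box only counts if the user ticked it themselves;
  // a pre-ticked default fails the "specific and unambiguous" test.
  return state.adPersonalization && state.setByUserAction;
}
```

Under this sketch, Google’s pre-ticked configuration would fail the check even though the box is ticked, which is precisely the CNIL’s objection.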


While Google is not the first company to be fined for violating the GDPR, this is by far the largest fine levied under the new regulations. However, the damage could have been far worse for Google. Organizations can be fined up to 4% of their annual global revenue, and with $33.7 billion in revenue last quarter alone, Google might consider itself lucky.
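A rough back-of-the-envelope calculation makes the gap concrete (annualizing the quarterly figure cited above is an approximation, not Google’s reported annual revenue):

```typescript
// Back-of-the-envelope: the GDPR caps fines at 4% of annual global
// revenue. Annualizing the ~$33.7B quarter cited above suggests a
// theoretical maximum fine in the billions, dwarfing the €50M levied.
const quarterlyRevenueUsd = 33.7e9;
const annualizedRevenueUsd = quarterlyRevenueUsd * 4; // ~$134.8B
const maxFineUsd = annualizedRevenueUsd * 0.04;       // roughly $5.4B
```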


Google’s Appeal May Help Clarify the Scope of the CNIL’s Ruling


In a statement to Politico, however, Google confirmed they will be appealing the CNIL’s decision: “We’ve worked hard to create a GDPR consent process for personalized ads that is as transparent and straightforward as possible, based on regulatory guidance and user experience testing. We’re also concerned about the impact of this ruling on publishers, original content creators and tech companies in Europe and beyond. For all these reasons, we’ve now decided to appeal.”


Google’s claim that its consent process is based on regulatory guidance and user experience testing may point to its argument: that it followed regulatory guidance (whether specific, targeted guidance or public guidance) in good faith and validated its process through user testing. If the regulators maintain that the consent is not informed, Google may try to refute that conclusion with an analysis of its user testing.


Google is also appealing to concerns from companies in other industries about how the GDPR may affect them. Echoing these concerns, Jon Slade, CCO of the Financial Times, told Digiday, “the interpretation of GDPR has been inconsistent at best, and in some cases has willfully chosen to ignore both the letter and the spirit of the regulation. The industry now can’t say it hasn’t been warned.” While Google is likely overstating its concern for other industries, the appeal process may at the very least help clarify the definition and scope of certain aspects of the GDPR.


The CNIL’s decision is therefore an essential reminder for any business that transparency and consent are increasingly becoming the name of the game. As the example of Google makes clear, simply having information available to consumers is not enough; that information needs to be “intelligible and easily accessible.” While the United States has no federal data protection law with the same scope as the GDPR, states such as California are beginning to pass regulations similar to those in the EU. Companies not currently affected by such regulations should therefore still prioritize data protection and put in place a plan that would allow quick and easy compliance with any new regulations that may be implemented. Or, as Jon Slade puts it, “anyone handling data would be crazy not to look at this strong enforcement of GDPR and double-check themselves.”



Please Sir, May I Have Another? Why Businesses Should Better Understand Their Cybersecurity Risks

In the recent op-ed, Maybe We Have the Cyber Security We Deserve, Roger Grimes argues that, despite the failings of current cyber security practices, consumers show a certain lack of interest in protecting their identity and data online. This attitude, Grimes argues, results from an increasing focus on incident response rather than prevention. That is, for the average consumer, the inconvenience of stolen data is decreasing: if credit card information gets stolen, you likely won’t be responsible for fraudulent charges; if your login information is compromised, you just change your password. The impact of data breaches on consumers is often rather low, and therefore, without a catastrophic event — a ‘digital 9/11,’ as Grimes puts it — there doesn’t seem to be much urgency for comprehensive change. “We are OK with OK security,” Grimes says. And maybe, he concludes, that’s good enough.


Grimes is not wrong in diagnosing the state of cyber security as reactive rather than proactive. And it is true that incident response should play an important role in cyber security; for businesses it is crucial for mitigating larger losses and reducing overall costs. The problem with Grimes’ argument, however, is that it places the focus on consumers rather than on businesses and their leadership. Instead of asking why users don’t care about security, we should ask why companies don’t care about security.


Organizations have to contend with and prioritize their position relative to risk every day. One would think that, given the scrutiny in the press after every new breach, companies would focus more on cybersecurity risk. But they don’t. The problem that Grimes neglects is the cultural and semantic disconnect between the technology and the business leadership.


The Equifax breach, for example, happened in large part because of a governance structure that stifled communication between security and IT. According to the report by the House Oversight Committee, the CSO used to report directly to the CIO, but because of personal differences Equifax decided to have the CSO report instead to legal. When others came to fill those positions, however, the structure remained the same. As a result, according to the report, “collaboration between IT and Security mostly occurred when required, such as when Security needed IT to authorize a change on the network. Communication and coordination between these groups was often inconsistent and ineffective at Equifax.”


While Equifax’s executive structure particularly facilitated a breakdown in communication, having the CSO report to the CIO might not have been good enough either. The House’s report goes on to say that “Equifax’s CEO did not prioritize cybersecurity” and that “the CSO was not considered part of the senior leadership team.” This structure excluded the CSO from quarterly senior leadership meetings.

Shifting the focus to prevention requires businesses to think about the handling of the private information consumers entrust to them within the context of their overall enterprise strategy. And this is something that can only start at the top. An article for the Harvard Business Review emphasizes including security executives in board meetings and other meetings about business priorities: “By including [security executives] in discussions about immediate and long-term business priorities, customer issues, and overall strategies, directors can ensure that the company’s security plan aligns with the company’s business goals.”


Roger Grimes is right that consumers have by and large accepted the inconveniences of data breaches, but it is up to businesses and their leadership to realize what is directly in front of them: technology and security are not just one aspect of the business, they are the business — and increasingly so. As cybersecurity expert Bruce Schneier states, consumers will change their views as “Automation, autonomy, and physical agency will make computer security a matter of life and death, and not just a matter of data.” Companies would therefore do well to bridge the gap between their business and their technology, creating a proactive culture that works to protect their most critical asset: their customers.



First Insurance Data Security Act Goes into Effect in South Carolina

As of the first of this year, the South Carolina Insurance Data Security Act has gone into effect. These regulations are based primarily on the National Association of Insurance Commissioners’ Data Security Model Law and are the first of their kind in the U.S. However, given increasing public scrutiny of how businesses handle sensitive information, it is likely such regulations will be taken up by other states in the years to come. New York, for instance, already has similar regulations in place via the Department of Financial Services, to say nothing of the California Consumer Privacy Act of 2018. Insurance carriers, brokers, agents, and other licensed entities should therefore take some time to familiarize themselves with these new regulations.


The South Carolina Insurance Data Security Act contains three major aspects:


1. It requires any “person licensed, authorized to operate, or registered, or required to be licensed, authorized, or registered pursuant to [ ] insurance laws” to notify the state within 72 hours of any cyber security event. The regulation defines such an event as one “resulting in unauthorized access to or the disruption or misuse of an information system or information stored on an information system.”


2. Licensees are required to maintain a comprehensive information security program that details how the company will protect the security and confidentiality of private information against outside threats. Companies must conduct a full risk assessment of a cyber security event in order to design and implement a program to mitigate the identified risks.


3. Licensees will also be required to implement a third-party provider program and to require that their providers implement appropriate administrative, technical, and physical measures to protect non-public information and relevant systems.


It must be noted that these regulations not only pertain to insurance companies, but will also impact insurance brokers, agents, other licensees, and their third-party vendors. The first deadline is to have a written security program in place by July 1, 2019. The implementation of a third-party provider program needs to be in place by July 1, 2020.
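As a hypothetical sketch of the first requirement above (the function name and the use of UTC timestamps are assumptions, not part of the Act), the 72-hour notification window reduces to a small date calculation:

```typescript
// Hypothetical helper: compute the latest permissible notification time
// under the Act's 72-hour rule, given when a cyber security event was
// determined to have occurred.
function notificationDeadline(determinedAt: Date): Date {
  const seventyTwoHoursMs = 72 * 60 * 60 * 1000;
  return new Date(determinedAt.getTime() + seventyTwoHoursMs);
}

// Example: an event determined on the morning of July 1 must be
// reported within three days.
const deadline = notificationDeadline(new Date("2019-07-01T09:00:00Z"));
```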


Moreover, the regulations themselves could easily be applied to fields outside of insurance. The concept of an information security program, for instance, is something that any business handling private information should begin considering in the event that similar regulations are applied across other states and in different sectors.