Blackbaud Facing Lawsuit After Ransomware Attack

In July, we wrote about a ransomware attack suffered by the cloud computing provider Blackbaud that led to the potential exposure of personal information entrusted to Blackbaud by hundreds of non-profits, healthcare organizations, and educational institutions. At the time the attack was announced, security experts questioned Blackbaud’s response to the breach. Now the incident is doing more than raising eyebrows: the company is facing a class action lawsuit over its handling of the attack.

Blackbaud was initially attacked on February 7th of this year. However, according to the company, the intrusion was not discovered until mid-May. While that is a long detection window, it is increasingly common for threats to go undetected for extended periods. What really gave security experts pause is how Blackbaud responded to the incident after detecting it.

The company was able to block the hackers’ access to its networks, but the attackers’ attempts to regain entry continued until June 3rd. The problem, however, was that the hackers had already stolen data sets from Blackbaud and demanded a bitcoin payment before destroying the information. Blackbaud remained in communication with the attackers until at least June 18th, when the company paid the ransom. Of course, many experts questioned Blackbaud’s decision to pay, given that there is no way to guarantee the attackers kept their word. And, to make matters worse, the company did not publicly announce the incident to the hundreds of non-profits that use its service until July 16th, nearly two months after initially discovering the breach.

Each aspect of Blackbaud’s response to the ransomware attack is now part of a class action lawsuit filed against the company by a U.S. resident on August 12th. The lawsuit’s main argument is that Blackbaud did not have sufficient safeguards in place to protect the private information the company “managed, maintained, and secured,” and that Blackbaud should cover the costs of credit and identity theft monitoring for those affected. The lawsuit also alleges that Blackbaud failed to provide “timely and adequate notice” of the incident. Finally, regarding Blackbaud’s payment of the ransom, the lawsuit argues that the company “cannot reasonably rely on the word of data thieves or ‘certificate of destruction’ issued by those same thieves, that the copied subset of any Private Information was destroyed.”

Despite the agreement among privacy experts that Blackbaud’s response to the attack was anything but perfect, lawsuits pertaining to data breaches have historically had a low success rate in the U.S. According to an attorney involved in the case, showing harm requires proving a financial loss rather than relying on the more abstract harm caused by a breach of privacy: “The fact that we don’t assign a dollar value to privacy [means] we don’t value privacy.”

Whatever the result of the lawsuit, questions persist about whether Blackbaud’s response violates the E.U.’s General Data Protection Regulation. The GDPR requires organizations to submit notification of a breach within 72 hours of discovery. Because many of Blackbaud’s clients are UK-based and the company took months to notify those affected, it is possible Blackbaud could receive hefty fines for its response to the attack. A spokesperson for the UK’s Information Commissioner’s Office told the BBC that the office is making enquiries into the incident.

As for the non-profits, healthcare organizations, and educational institutions that were affected by the breach? They have had to scramble to notify their donors and stakeholders that their data may have been compromised. Non-profits in particular rely on their reputations to keep donations coming in. While these organizations were not directly responsible for the breach, the incident highlights the need to carefully review third-party vendors’ security policies and to put a written security agreement in place with every vendor before using its services.

Privacy and the Commodification of Identity

By now it is commonly understood that free online services such as social media, search engines, and email are not actually free. Instead, we use those services in exchange for data about who we are and what we want, which can then be used to show us highly targeted advertising or simply sold to third-party companies. Our identities are now among the most valuable commodities in the world, and we hand them to tech giants every single day.

That’s why there is a growing movement among some lawmakers to make companies pay consumers for the data they use. Data dividends, as the idea is called, are now being pushed by politicians like Andrew Yang and California Governor Gavin Newsom, who argue that requiring companies to pay users for their data will empower consumers to take more control of their online identity and privacy.

The problem, however, is that once you take a closer look at the concept Yang and Governor Newsom are pushing, it becomes clear that this project, though meant to promote privacy, ends up reinforcing a system that commodifies consumer data and disincentivizes privacy-first practices. We are treading a dangerous path if we attempt to monetize identity.

Here is why:

Paying consumers for their data doesn’t protect their privacy. Instead, it ends up justifying the current practice of data mining that undermines the right to privacy. Certain companies are already using similar tactics: Amazon, for example, offered users $25 Amazon gift cards in exchange for full-body 3D scans. It’s a dramatic example, but fundamentally equivalent to what lawmakers are now proposing.

Privacy is a human right, and as such it cannot be bought and sold in a free and open society. It’s like saying that companies can take away your right to free expression so long as they compensate you for it. Making money off of user data and sharing it with third parties has already been normalized by tech companies, and data dividends only further validate these practices.

This isn’t to say that distributing the money earned from personal information back to the consumer is necessarily a bad thing; it’s simply an issue entirely separate from privacy. If companies are required to pay out data dividends, it would in no way lessen the importance of ensuring the privacy of our identities and data.

First American Facing Hefty Fines for Data Breach

On Wednesday, the New York Department of Financial Services (NYDFS) announced its first-ever cybersecurity charges, filed against title insurance company First American for a data breach that exposed hundreds of millions of records containing sensitive information over the course of nearly five years.

The First American data breach began in October 2014, after an error in an application update left 16 years’ worth of mortgage title insurance records available to anyone online without authentication. These documents included information such as Social Security numbers, tax records, bank statements, and driver’s license images. The error went undetected until December 2018, when a penetration test conducted by First American discovered the vulnerability. According to the NYDFS, however, First American did not report the breach and left the documents exposed for another six months, until a cybersecurity journalist discovered and published a report on it.
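Neither the charges nor the press coverage include source-level detail, but a system that serves records to anyone who has (or guesses) a URL follows a well-known pattern: an endpoint that looks up a document using only the identifier in the request, with no authentication or per-document authorization check. The sketch below is purely hypothetical; the handler, routes, and data are illustrative stand-ins, not First American’s actual system, and the comments mark where the missing checks belong.

```python
# Hypothetical sketch of the class of flaw described above: an endpoint
# that returns any document whose ID appears in the URL. All names and
# data are illustrative; nothing here comes from First American's system.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in data store: document ID -> sensitive record
DOCUMENTS = {
    "1000000": b"...mortgage title record, SSN, bank statement...",
    "1000001": b"...another borrower's record...",
}

class DocumentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Vulnerable pattern: the ID in the URL is the only "credential".
        # Sequential IDs let anyone enumerate every record by incrementing.
        doc_id = self.path.rstrip("/").rsplit("/", 1)[-1]
        record = DOCUMENTS.get(doc_id)
        if record is None:
            self.send_error(404)
            return
        # Missing here: (1) verify an authenticated session, and
        # (2) confirm that the session's user is authorized for THIS record.
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        self.wfile.write(record)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DocumentHandler).serve_forever()
```

In a setup like this, an unauthenticated request such as `curl http://localhost:8000/documents/1000001` returns another person’s record, which is precisely the kind of behavior a routine vulnerability test is meant to catch.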

The charges against First American mark the first time the NYDFS has enforced the department’s cybersecurity regulation, established in 2017. The regulation requires financial organizations licensed to operate in New York to establish and follow a comprehensive cybersecurity policy, provide training for all employees, implement effective access controls, and conduct regular vulnerability tests in line with a cybersecurity risk assessment.

First American is facing six charges, including failing to follow its internal cybersecurity policy, misclassifying the exposed documents as “low” severity, and failing to investigate and report the breach in a timely manner.

While the fine for a single violation of the regulation is only up to $1,000, the NYDFS considers each exposed document a separate violation. So, with up to 885 million records potentially exposed, First American could be looking at fines running well into the millions of dollars if the charges stick.

News of the charges should serve as a wake-up call to U.S. organizations unconcerned with cybersecurity regulations. While the U.S. does not have a comprehensive federal regulation, a number of state regulations have gone into effect in the past five years. First American is likely only the first of many companies to face enforcement unless organizations take steps now to ensure compliance.

E.U. and U.S. Privacy Framework Struck Down

Last week the top court in the European Union found that Privacy Shield, the framework used to transfer data between the E.U. and the U.S., does not sufficiently protect the privacy of E.U. citizens and is therefore invalid. The court’s decision has left many businesses scrambling and throws the differences between E.U. and U.S. privacy standards into stark relief.

Privacy Shield was a data-sharing framework approved by the European Commission in 2016. Two years later, however, the E.U.’s General Data Protection Regulation (GDPR) took effect, placing stricter requirements on the processing of E.U. citizens’ data. According to the Washington Post, over 5,300 companies that signed up to use the Privacy Shield framework, including Facebook, Google, Twitter, and Amazon, now need to find a new way to handle the data of E.U. citizens in the United States.

The court made its decision after privacy expert Max Schrems filed a complaint against Facebook for violating his privacy rights under the GDPR when Facebook moved his information to the U.S. for processing. While the GDPR does allow the data of E.U. citizens to be transferred to other countries, that data must continue to meet GDPR standards after it is transferred. The problem with Privacy Shield, according to the E.U. decision, is that the U.S. government has wide-reaching access to personal data stored in the United States. And while the E.U. acknowledges that government authorities may access personal information when necessary for public security, the court ultimately found that the U.S. does not meet the requirements of the GDPR “in so far as the surveillance programmes…are not limited to what is strictly necessary.”

This decision starkly highlights the differences not only between E.U. and U.S. privacy regulations but also between the privacy standards each applies to surveillance activities. In a statement to the Washington Post, Schrems said, “The court clarified…that there is a clash of E.U. privacy law and U.S. surveillance law. As the E.U. will not change its fundamental rights to please the [National Security Agency], the only way to overcome this clash is for the U.S. to introduce solid privacy rights for all people — including foreigners….Surveillance reform thereby becomes crucial for the business interests of Silicon Valley.”

Moving forward, U.S. companies processing E.U. citizens’ data will either need to keep that data on servers within the E.U. or use standard contractual clauses (SCCs). SCCs are legal agreements created by individual organizations that govern how data is used. Of course, any SCCs will need to be compliant with the GDPR.

Time will tell exactly how this ruling will affect U.S. businesses holding data from E.U. citizens, but it is only one of many examples showing that the E.U. takes consumer privacy extremely seriously. All businesses with users in the E.U., large or small, should therefore carefully assess their privacy practices and ensure they are in line with the GDPR. At the end of the day, it’s better to have a privacy policy that is stricter than it needs to be than to scramble at the last second when the E.U. hands down another ruling like last week’s.

Compliance is Not a Security Strategy

The good news: Many companies these days are using cybersecurity controls and providing security training for their employees. The bad news: A lot of these businesses are putting in place the bare minimum needed to meet compliance requirements. The truth is, however, that you can be compliant but not secure. Remember the big Target breach in 2013? Hackers were able to take the debit and credit card information of millions of shoppers by accessing Target’s point-of-sale systems. The irony is that, just months before the attack, Target was certified PCI compliant. In the words of then-CEO Gregg Steinhafel, “Target was certified as meeting the standard for the payment card industry in September 2013. Nonetheless, we suffered a data breach.” Simply put: Target was compliant but not secure.

Creating a Culture

If your security awareness program is a “check the box” compliance program, you can bet your employees are going through the same motions as you are. How has that improved your security posture? It hasn’t. Instead, creating a strong security program is first and foremost about creating a culture around security. And this has to start at the top, with your executive officers and your board. If business leaders set a security-focused tone, then employees will likely follow suit.

The reason a business can be compliant and not secure is that cybersecurity isn’t a one-and-done deal. Compliance is a state; cybersecurity is an ongoing process that involves the entire organization, from the boardroom to the cubicle. Verizon’s Data Breach Investigations Report shows that the human element is the largest factor leading to breaches today. If that’s the case, perhaps instead of checking off the boxes, and before investing in that new machine-learning intrusion detection gizmo, consider focusing on human learning, engagement, and the behaviors that can drive a mindful security culture.

GDPR Report Shows Success with Room for Improvement

The EU’s General Data Protection Regulation (GDPR), one of the most comprehensive privacy laws in the world, celebrated its two-year anniversary last month. The regulation establishes a range of privacy and data protection rights for EU citizens, such as widened conditions for consent and the right to request that companies delete user data, and requires organizations to implement technical safeguards. Along with the regulation come some pretty hefty fines: Google, for example, received a 50 million euro fine for failing to properly state how it uses consumer data. The law also requires the European Commission to release a report evaluating the regulation after its first two years, then every four years going forward. In compliance with the law, the commission released its report this month, broadly finding the regulation a success while highlighting certain areas for improvement.

Strengths

According to the GDPR report, one of the regulation’s main successes is increased awareness of privacy rights among EU citizens and a growing sense that they are empowered to exercise those rights. The report found that 69% of the EU population above the age of 16 has heard of the GDPR and 71% know about their country’s national data protection agency. One issue, however, is that this awareness has not fully translated into the use of these rights. The right to data portability, for example, which allows users to obtain and transfer their data, shows potential to “put individuals at the centre of the data economy” but, according to the report, is still underutilized.

Another area of success is the regulation’s flexibility in applying the principles of the law to emerging technologies. This has been especially important recently, with the rise of the COVID-19 pandemic and the numerous tracing apps created in response. The report found that the GDPR has been successful in providing a framework that allows for innovation while ensuring that these new technologies are created with privacy in mind.

Areas of Improvement

Perhaps the biggest area of concern the report highlights is the uneven application of the GDPR among EU states. All EU member states except Slovenia have adopted the law. However, the report notes that the law has not been applied consistently across the board. For example, the GDPR allows individual member states to set the age of consent for data processing, but this has created uncertainty for children and parents and made it more difficult for companies that conduct business across borders. The commission has recommended creating codes of conduct that apply across all member states in order to allow for more consistency.

The GDPR report also found some inconsistency when it comes to enforcing the regulation. Overall, the report found that the various data protection agencies were properly using their strengthened enforcement capabilities, but it worried that resources have not been evenly divided among the agencies. While some countries seen as tech hubs require additional resources, the commission found the overall budget allocation too uneven.

Taking a step back, the GDPR report largely shows that the regulation has had a positive impact on attitudes toward privacy and has empowered individuals to take control of their information. The law, however, is still relatively new and will continue to require tweaks to better serve consumers. Privacy regulations remain a work in progress, but they are at least headed in the right direction.