This month Blackbaud, a cloud computing provider primarily serving nonprofits and educational institutions, announced that the company suffered a ransomware attack back in May. The company’s response, however, has raised more than a few eyebrows among security experts and left hundreds of nonprofits scrambling to figure out if they’ve been affected. The Blackbaud breach is just the latest reminder that third-party data processors can be a liability to your business.
According to Blackbaud’s statement about the breach, the company quickly discovered the attack and was able to remove the attackers from their systems — but not before the hackers stole a copy of a data set. Blackbaud has not specified the exact nature of that data, but claims it does not include sensitive information such as credit card information, bank account information, or social security numbers. One source told the BBC, however, that the stolen data involves donor information from hundreds of nonprofits and institutions and includes details such as names, addresses, ages, and estimated wealth. Now, organizations that are customers of Blackbaud are scrambling to see if their donors’ information was included in the breach and, if so, must release data breach disclosures of their own.
The most egregious part of the Blackbaud breach, however, was the company’s response. When they discovered their data had been stolen, they agreed to pay a ransom to have the attackers delete that data. Subsequently, Blackbaud assured their customers that there is no reason to believe the stolen data “was or will be misused; or will be disseminated or otherwise made available publicly.” However, cybersecurity experts have been quick to point out that this is a dangerous assumption to make.
Firstly, they got ransom’d but sounds like the actor also had a copy of the data. They paid the ransom and somehow believe that the (criminal) actor kindly removed their copy of the @blackbaud data: https://t.co/VrR5my2S8U
Despite Blackbaud’s insistence that the data has been deleted by the hackers, the company has not stated why they are confident in that assumption, and no external investigation has been able to confirm it. As many have noted, Blackbaud’s response to the breach seems more an attempt to protect their brand’s reputation, rather than a transparent disclosure. There are also questions about the amount of time the company took to disclose the breach, and whether or not that violates GDPR requirements.
The fact that so many questions about the Blackbaud breach are still unanswered two weeks after it was announced has not been reassuring to the nonprofits that use the company’s services. Over 100 organizations have already notified their donors about the breach, and more will likely do so in the weeks ahead.
While this is far from the only third-party provider to suffer a data breach, the attack on Blackbaud is a rather stark example of why businesses need to take the time to carefully evaluate third-party security practices, as well as insist on strong agreements that define accountability and responsibilities in the event of an incident. This is especially important for associations and nonprofits because their very existence relies on the trust that their members or donors place in them. When that trust is violated, it takes a long time to repair.
Business ID theft is not a new problem. Dun and Bradstreet, a data analytics company that handles credit checks for many businesses, reported a 100% increase in ID theft against businesses in 2019. This year, however, the problem has grown out of control, with a stunning 258% spike in business identity theft since the beginning of 2020. This is in large part directly related to the COVID-19 pandemic, because scammers will steal business information to illegally gain access to relief funds and loans.
According to reports, there are groups of cyber scammers throughout the United States that target small businesses for ID theft. The groups will start by looking up business records through the Secretary of State website, identify the officers and owners connected to the company, then find corresponding tax ID and social security numbers on the dark web. These groups will then forge official documents with this information and submit them to the Secretary of State with a mailing address that they control. Typically, they will use these documents to update profiles on credit monitoring sites, like Dun and Bradstreet, and apply for credit lines with companies like Staples, Home Depot, and Office Depot. Now, however, these groups have switched their tactics and are carrying out business ID thefts for COVID-related federal assistance, such as unemployment payments or relief loans for small businesses.
As we’ve written about before, hackers and scammers will often take advantage of times of crisis, confusion, and uncertainty in order to make money or seed further chaos. Given the dramatic rise of business ID theft throughout the COVID pandemic, small businesses should take steps to protect themselves against this threat. The most effective way to detect and prevent ID theft is to regularly monitor and update your business information. This includes keeping an eye on your financial records and credit lines to spot potential fraudulent activity, as well as checking your business records with the federal and state government. If you spot any changes to your records that you don’t recognize, it’s a likely sign someone is in the process of stealing your business’ identity.
It can be hard to regularly monitor your records and stay vigilant when you are trying to keep your business afloat throughout the pandemic, but this is exactly what scammers are hoping for. You don’t need to be checking your credit report every single day, but it is essential to keep as close an eye as possible on your records to ensure you and your business are protected from fraud.
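For anyone who wants to make this monitoring routine, a simple script can help. The sketch below is a minimal, hypothetical example (the field names and values are illustrative, not tied to any real registry or API): it compares a previously saved snapshot of your business registration details against freshly retrieved values and flags any field that changed, such as a registered agent or mailing address that was silently altered.

```python
# Minimal sketch: detect unexpected changes in business records by
# comparing a saved snapshot against freshly retrieved values.
# All field names and record values below are hypothetical examples.

def diff_records(saved: dict, current: dict) -> dict:
    """Return a mapping of field -> (old, new) for every field that changed."""
    changes = {}
    for field in saved.keys() | current.keys():
        old, new = saved.get(field), current.get(field)
        if old != new:
            changes[field] = (old, new)
    return changes

# Example: the mailing address was altered to an address the owner does
# not control -- a common sign of the business ID theft described above.
saved = {
    "registered_agent": "Jane Doe",
    "mailing_address": "100 Main St",
    "status": "Active",
}
current = {
    "registered_agent": "Jane Doe",
    "mailing_address": "PO Box 9999",  # potentially attacker-controlled
    "status": "Active",
}

print(diff_records(saved, current))
# → {'mailing_address': ('100 Main St', 'PO Box 9999')}
```

Run periodically (say, monthly) against records you pull manually from your state’s business registry, a diff like this turns an easy-to-miss change into an obvious alert.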
By now it is commonly understood that free online services such as social media, search engines, and email are not actually free. Instead, we use those services in exchange for data about who we are and what we want, which can then be used to show us highly targeted advertising or simply sold to third-party companies. Our very identities are now among the most valuable commodities in the world, and we hand them over to tech giants every single day.
That’s why there is a growing movement among some lawmakers to make companies pay consumers for the data they use. This idea, known as data dividends, is now being pushed by politicians like Andrew Yang and California Governor Gavin Newsom, who argue that by ensuring companies pay users for their data, consumers will be empowered to take more control of their online identity and privacy.
The problem, however, is that once you take a closer look at the concept Yang and Governor Newsom are pushing, it becomes clear that this project, which is meant to promote privacy, ends up reinforcing a system that commodifies consumer data and disincentivizes privacy-first practices. We are treading a dangerous path if we attempt to monetize identity.
Here is why:
Paying consumers for their data doesn’t protect their privacy. Instead it ends up justifying the current practice of data mining that undermines the right to privacy. Certain companies are already using similar practices. Amazon, for example, offered $25 Amazon gift cards for full body 3D scans of their users. It’s a dramatic example, but fundamentally equivalent to what lawmakers are now proposing.
Privacy is a human right, and as such it cannot be bought and sold in a free and open society. It’s like saying that companies can take away your right to free expression so long as they compensate you for it. Making money off of user data and sharing it with third parties has already been normalized by tech companies, and data dividends only further validate these practices.
This isn’t to say that distributing the money earned from personal information back to the consumer is necessarily a bad thing; it’s simply an issue entirely separate from privacy. Even if companies were required to pay out data dividends, it would in no way lessen the importance of ensuring the privacy of our identities and data.
On Wednesday, the New York Department of Financial Services (NYDFS) announced its first-ever cybersecurity charges, against title insurance company First American, for a data breach that exposed hundreds of millions of records containing sensitive information over the course of nearly five years.
The First American data breach initially occurred in October 2014, after an error in an application update left 16 years’ worth of mortgage title insurance records available to anyone online without authentication. These documents included information such as social security numbers, tax records, bank statements, and driver’s license images. The error went undetected until December 2018, when First American conducted a penetration test that discovered the vulnerability. According to the NYDFS, however, First American did not report the breach and left the documents exposed for another six months, until a cybersecurity journalist discovered and published a report on the breach.
The charges against First American mark the first time the NYDFS is enforcing the department’s cybersecurity regulation established in 2017. The regulation requires financial organizations licensed to operate in New York to establish and follow a comprehensive cybersecurity policy, provide training for all employees, implement effective access controls, and conduct regular vulnerability tests in line with a cybersecurity risk assessment.
First American is facing six charges, including failing to follow its internal cybersecurity policy, misclassifying the exposed documents as “low” severity, and failing to investigate and report the breach in a timely manner.
While the fine for each violation of the regulation is only up to $1,000, the NYDFS considers each exposed document a separate violation. So, with up to 885 million records potentially exposed, First American could be looking at millions of dollars in fines if the charges stick.
News of the charges should serve as a wake-up call to U.S. organizations unconcerned with cybersecurity regulations. While the U.S. does not have a federal cybersecurity regulation, a number of state regulations have gone into effect in the past five years. First American is likely just the first of many companies that will face enforcement unless they take steps now to ensure compliance.
Last week the top court in the European Union found that Privacy Shield, the framework used to transfer data between the E.U. and the U.S., does not sufficiently protect the privacy of E.U. citizens and is therefore invalid. The court’s decision has left many businesses scrambling and throws the differences between E.U. and U.S. privacy standards into stark relief.
Privacy Shield was a data-sharing framework adopted by the European Commission in 2016. Two years later, however, the E.U.’s General Data Protection Regulation (GDPR) took effect, placing stricter privacy requirements on the processing of E.U. citizens’ data. According to the Washington Post, over 5,300 companies — including Facebook, Google, Twitter, and Amazon — that signed up to use the Privacy Shield framework now need to find a new way to handle the data of E.U. citizens in the United States.
The court made its decision after privacy expert Max Schrems filed a complaint against Facebook for violating his privacy rights under the GDPR once Facebook moved his information to the U.S. for processing. While the GDPR does allow the data of E.U. citizens to be transferred to other countries, that data must continue to comply with GDPR standards after it is transferred. The problem with Privacy Shield, according to the E.U. decision, is that the U.S. government has wide-reaching access to personal data stored in the United States. And while the E.U. acknowledges that government authorities may access personal information when necessary for public security, the court ultimately found that the U.S. does not meet the requirements of the GDPR “in so far as the surveillance programmes…. are not limited to what is strictly necessary.”
This decision starkly highlights the differences not only in E.U. and U.S. privacy regulations but also the privacy standards used in surveillance activities. In a statement to the Washington Post, Schrems said, “The court clarified…that there is a clash of E.U. privacy law and U.S. surveillance law. As the E.U. will not change its fundamental rights to please the [National Security Agency], the only way to overcome this clash is for the U.S. to introduce solid privacy rights for all people — including foreigners….Surveillance reform thereby becomes crucial for the business interests of Silicon Valley.”
Moving forward, U.S. companies processing E.U. citizen data will either need to keep that data on servers within the E.U. or use standard contractual clauses (SCCs). SCCs are legal agreements created by individual organizations that cover how data is used. Of course, any SCCs will need to be compliant with the GDPR.