by Doug Kreitzberg | Apr 1, 2021 | Data Breach, Regulations
In the wake of the recent SolarWinds hack, a vendor compromise that infected tightly protected government agencies, the Biden administration is reportedly planning a new cybersecurity executive order as early as this week. While a National Security Council spokeswoman said no decision has been made on the final content of the executive order, among the measures being reported is a new requirement that any vendors working with federal government agencies must report suspected breaches to those agencies.
While there have been multiple previous attempts to establish breach notification laws through Congress, industry resistance has so far succeeded in stopping those bills from passing. But now, following the massive SolarWinds and Microsoft hacks of the past few months, there may not be much vendors can do to stop it this time.
Along with the breach notification requirement, the planned cybersecurity executive order is reported to contain a series of additional security requirements for software and programs used by federal agencies. This may include requiring federal agencies to adopt small but essential security measures such as multi-factor authentication and data encryption.
Overall, the executive order appears to create broader levels of transparency and communication between software vendors and government agencies regarding cybersecurity. For example, since many pieces of software now link directly to other programs and services, the order is reported to also require a “software bill of materials” that lays out what the software contains and what other services it connects to. According to Reuters, the order may also create a cybersecurity incident response board, encouraging communication between government agencies, vendors, and victims.
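To make the idea concrete, here is a minimal sketch of what a software bill of materials might contain, written as a small Python script that emits a CycloneDX-style JSON document (CycloneDX is one of several common SBOM formats). Every component name, version, and service URL below is hypothetical and purely illustrative; real SBOMs are generated by build tooling, not written by hand.

```python
import json

# A minimal, illustrative software bill of materials (SBOM) in a
# CycloneDX-style JSON layout. Every name, version, and URL below is
# hypothetical; real SBOMs are generated by build tooling.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "metadata": {
        "component": {
            "type": "application",
            "name": "example-reporting-app",  # hypothetical product
            "version": "2.3.1",
        }
    },
    "components": [
        # Libraries bundled into the shipped software.
        {"type": "library", "name": "openssl", "version": "1.1.1k"},
        {"type": "library", "name": "log4j-core", "version": "2.17.2"},
    ],
    "services": [
        # Outside services the application connects to at runtime.
        {"name": "telemetry-endpoint", "endpoints": ["https://telemetry.example.com"]},
    ],
}

print(json.dumps(sbom, indent=2))
```

The value for an agency reviewing such a file is that a newly disclosed vulnerability in a bundled library can be matched against the components list immediately, without waiting for the vendor to investigate.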
If Biden signs the executive order, it may be the first step towards a more robust and efficient response to the increasing cyber threats government agencies are facing. According to Reuters, it may also open the door to broader public disclosure legislation. By being transparent and openly sharing information, both government agencies and private organizations will be able to identify and mitigate threats more quickly and effectively.
by Doug Kreitzberg | Sep 9, 2020 | Cyber Awareness, GDPR, ransomware
In July, we wrote about a ransomware attack suffered by the cloud computing provider Blackbaud that led to the potential exposure of personal information entrusted to Blackbaud by hundreds of non-profits, health care organizations, and educational institutions. At the time the ransomware attack was announced, security experts questioned Blackbaud’s response to the breach. Now the attack isn’t just raising eyebrows: the company is facing a class action lawsuit over its handling of the incident.
Blackbaud was initially attacked on February 7th of this year. However, according to the company, they did not discover the issue until mid-May. While the time it took the company to detect the intrusion was long, it is increasingly common for threats to go undetected for long periods of time. What really gave security experts pause is how Blackbaud responded to the incident after detecting it.
The company was able to block the hackers’ access to its networks, but the attackers’ attempts to regain entry continued until June 3rd. The problem, however, was that the hackers had already stolen data sets from Blackbaud and demanded a bitcoin payment before they would destroy the information. Blackbaud remained in communication with the attackers until at least June 18th, when the company paid the ransom. Of course, many experts questioned Blackbaud’s decision to pay, given that there is no way to guarantee the attackers kept their word. And, to make matters worse, the company did not publicly announce the incident to the hundreds of non-profits that use its service until July 16th, nearly two months after initially discovering the incident.
Each aspect of Blackbaud’s response to the ransomware attack is now part of a class action lawsuit filed against the company by a U.S. resident on August 12th. The lawsuit’s main argument is that Blackbaud did not have sufficient safeguards in place to protect the private information that the company “managed, maintained, and secured,” and that Blackbaud should cover the costs of credit and identity theft monitoring for those affected. The lawsuit also alleges that Blackbaud failed to provide “timely and adequate notice” of the incident. Finally, regarding Blackbaud’s payment of the ransom demand, the lawsuit argues that the company “cannot reasonably rely on the word of data thieves or ‘certificate of destruction’ issued by those same thieves, that the copied subset of any Private Information was destroyed.”
Despite the agreement among privacy experts that Blackbaud’s response to the attack was anything but perfect, lawsuits pertaining to data breaches have historically had a low success rate in the U.S. According to an attorney involved in the case, showing harm requires proving a financial loss rather than relying on the more abstract harm caused by a breach of privacy: “The fact that we don’t assign a dollar value to privacy [means] we don’t value privacy.”
Whatever the result of the lawsuit, questions persist over whether Blackbaud’s response violates the E.U.’s General Data Protection Regulation. The GDPR requires organizations to notify the relevant supervisory authority of a breach within 72 hours of discovery. Because many of Blackbaud’s clients are UK-based and the company took months to notify those affected, it is possible Blackbaud could receive hefty fines for its response to the attack. A spokesperson for the UK’s Information Commissioner’s Office told the BBC that the office is making enquiries into the incident.
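To put the 72-hour requirement in perspective, here is a quick arithmetic sketch in Python. The discovery date is an assumption, since Blackbaud said only that it discovered the attack in “mid-May”; the July 16th notification date is from the reporting above.

```python
from datetime import datetime, timedelta

# GDPR Article 33: notify the supervisory authority within 72 hours of
# becoming aware of a personal data breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

# The discovery date is an assumption ("mid-May"); July 16th is the
# public notification date reported above.
discovered = datetime(2020, 5, 14)
notified = datetime(2020, 7, 16)

deadline = discovered + NOTIFICATION_WINDOW
overrun = notified - deadline

print(f"72-hour deadline:    {deadline:%Y-%m-%d}")
print(f"Actual notification: {notified:%Y-%m-%d}")
print(f"Missed by roughly {overrun.days} days")
```

Even allowing generous uncertainty around the exact discovery date, a notification arriving two months later overshoots the statutory window by an order of magnitude.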
As for the non-profits, healthcare organizations, and educational institutions that were affected by the breach? They have had to scramble to notify their donors and stakeholders that their data may have been compromised. Non-profits in particular rely on their reputations to keep donations coming in. While these organizations were not directly responsible for the breach, the incident highlights the need to carefully review third-party vendors’ security policies and to put a written security agreement in place with every vendor before using their services.
by Doug Kreitzberg | Jul 28, 2020 | Phishing, Regulations
By now it is commonly understood that free online services such as social media, search engines, and email are not actually free. Instead, we use those services in exchange for data about who we are and what we want, which can then be used to show us highly targeted advertising or simply sold to third-party companies. Our identities have become one of the most valuable commodities in the world, and we hand them over to tech giants every single day.
That’s why there is a growing movement among some lawmakers to make companies pay consumers for the data they use. Data dividends, as the idea is called, are now being pushed by politicians like Andrew Yang and California governor Gavin Newsom, who argue that requiring companies to pay users for their data will empower consumers to take more control of their online identity and privacy.
The problem, however, is that once you take a closer look at the concept Yang and Governor Newsom are pushing, it becomes clear that this project, which is meant to promote privacy, ends up reinforcing a system that commodifies consumer data and disincentivizes privacy-first practices. We are treading a dangerous path if we attempt to monetize identity.
Here is why:
Paying consumers for their data doesn’t protect their privacy. Instead, it ends up justifying the current practice of data mining that undermines the right to privacy. Some companies already use similar practices: Amazon, for example, has offered $25 gift cards in exchange for full-body 3D scans of its users. It’s a dramatic example, but fundamentally equivalent to what lawmakers are now proposing.
Privacy is a human right, and as such it cannot be bought and sold in a free and open society. It’s like saying that companies can take away your right to free expression so long as they compensate you for it. Making money off of user data and sharing it with third parties has already been normalized by tech companies, and data dividends only further validate these practices.
This isn’t to say that distributing the money earned from personal information back to the consumer is necessarily a bad thing; it’s simply an issue entirely separate from privacy. If companies are required to pay data dividends, that in no way lessens the importance of ensuring the privacy of our identities and data.
by Doug Kreitzberg | Jul 24, 2020 | Compliance, Cybersecurity, Data Breach, Regulations
On Wednesday, the New York Department of Financial Services (NYDFS) announced its first-ever cybersecurity charges against title insurance company First American for a data breach that exposed hundreds of millions of records containing sensitive information over the course of nearly five years.
The First American data breach began in October 2014, after an error in an application update left 16 years’ worth of mortgage title insurance records available to anyone online without authentication. These documents included information such as social security numbers, tax records, bank statements, and driver’s license images. The error went undetected until December 2018, when a penetration test conducted by First American discovered the vulnerability. According to the NYDFS, however, First American did not report the breach and left the documents exposed for another six months, until a cybersecurity journalist discovered and published details of the breach.
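Reports described the flaw as a broken access control problem: document URLs could be retrieved without any check on who was asking. The sketch below, a hypothetical Flask route invented purely for illustration (it is not First American’s actual code), shows the kind of server-side checks whose absence produces this class of exposure.

```python
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for Flask session support

# Hypothetical ownership map standing in for a real database lookup.
DOCUMENT_OWNERS = {1001: "alice", 1002: "bob"}

@app.route("/documents/<int:doc_id>")
def get_document(doc_id):
    # The missing control in this class of breach: verify that the
    # requester is logged in AND authorized for this specific record,
    # rather than serving any document whose ID appears in the URL.
    user = session.get("user")
    if user is None:
        abort(401)  # not authenticated
    if DOCUMENT_OWNERS.get(doc_id) != user:
        abort(403)  # authenticated, but not authorized for this record
    return f"contents of document {doc_id}"
```

Without those two checks, anyone who guesses or increments a document ID in the URL can read the record, which is why this pattern exposes entire archives at once.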
The charges against First American mark the first time the NYDFS has enforced the department’s cybersecurity regulation, established in 2017. The regulation requires financial organizations licensed to operate in New York to establish and follow a comprehensive cybersecurity policy, provide training for all employees, implement effective access controls, and conduct regular vulnerability tests in line with a cybersecurity risk assessment.
First American is facing six charges, including failing to follow its own internal cybersecurity policy, misclassifying the exposed documents as “low” severity, and failing to investigate and report the breach in a timely manner.
While the fine for a violation of the regulation is capped at $1,000, the NYDFS considers each exposed document a separate violation. So, with up to 885 million records potentially exposed, First American could be looking at millions of dollars in fines if the charges stick.
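Because the department treats each exposed record as its own violation, the theoretical maximum penalty is simple arithmetic, and even a tiny fraction of it reaches well into the millions. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope exposure under the NYDFS regulation, treating
# each exposed record as a separate violation (figures from the text).
MAX_FINE_PER_VIOLATION = 1_000           # dollars, cap per violation
RECORDS_POTENTIALLY_EXPOSED = 885_000_000

theoretical_max = MAX_FINE_PER_VIOLATION * RECORDS_POTENTIALLY_EXPOSED
print(f"Theoretical maximum: ${theoretical_max:,}")        # $885,000,000,000

# Even a penalty covering a minuscule fraction of records adds up fast:
tiny_fraction = theoretical_max * 0.0001                    # 0.01%
print(f"0.01% of the maximum: ${tiny_fraction:,.0f}")       # $88,500,000
```

No regulator would levy anything near the theoretical figure, but the per-record structure is exactly what gives the charges their teeth.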
News of the charges should serve as a wake-up call to U.S. organizations unconcerned with cybersecurity regulations. While the U.S. does not have a comprehensive federal cybersecurity regulation, a number of state regulations have gone into effect in the past five years. First American is likely only the first of many companies that will face enforcement unless organizations take steps now to ensure compliance.
by Doug Kreitzberg | Jul 22, 2020 | Compliance, GDPR, Regulations
Last week the top court in the European Union found that Privacy Shield, the framework used to transfer data between the E.U. and the U.S., does not sufficiently protect the privacy of E.U. citizens and is therefore invalid. The court’s decision has left many businesses scrambling and throws the differences between E.U. and U.S. privacy standards into stark relief.
Privacy Shield was a data-sharing framework approved by the European Commission in 2016. Since then, however, the E.U. has established the General Data Protection Regulation (GDPR), which places stricter privacy requirements on the processing of E.U. citizens’ data. According to the Washington Post, over 5,300 companies that signed up to use the Privacy Shield framework, including Facebook, Google, Twitter, and Amazon, now need to find a new way to handle the data of E.U. citizens in the United States.
The court made its decision after privacy activist Max Schrems filed a complaint against Facebook for violating his privacy rights under the GDPR when Facebook moved his information to the U.S. for processing. While the GDPR does allow the data of E.U. citizens to be transferred to other countries, that data must continue to meet GDPR standards after it is transferred. The problem with Privacy Shield, according to the E.U. decision, is that the U.S. government has wide-reaching access to personal data stored in the United States. And while the E.U. acknowledges that government authorities may access personal information when necessary for public security, the court ultimately found that the U.S. does not meet the requirements of the GDPR “in so far as the surveillance programmes…. are not limited to what is strictly necessary.”
This decision highlights differences not only in E.U. and U.S. privacy regulations but also in the privacy standards governing surveillance activities. In a statement to the Washington Post, Schrems said, “The court clarified…that there is a clash of E.U. privacy law and U.S. surveillance law. As the E.U. will not change its fundamental rights to please the [National Security Agency], the only way to overcome this clash is for the U.S. to introduce solid privacy rights for all people — including foreigners….Surveillance reform thereby becomes crucial for the business interests of Silicon Valley.”
Moving forward, U.S. companies processing E.U. citizen data will either need to keep that data on servers within the E.U. or use standard contractual clauses (SCCs). SCCs are legal agreements, based on contract templates approved by the European Commission, that govern how transferred data is handled. Of course, any SCCs will still need to comply with the GDPR.
Time will tell exactly how this ruling will affect U.S. businesses holding data from E.U. citizens, but it is one of many examples showing that the E.U. takes consumer privacy extremely seriously. All businesses with users in the E.U., large or small, should therefore carefully assess their privacy practices and ensure they are in line with the GDPR. At the end of the day, it’s better to have a privacy policy that is stricter than it needs to be than to scramble at the last second when the E.U. hands down another ruling like last week’s.
by Doug Kreitzberg | Jul 8, 2020 | Compliance, Human Factor, Regulations
The good news: many companies these days use cybersecurity controls and provide security training for their employees. The bad news: a lot of these businesses are putting in place only the bare minimum needed to meet compliance requirements. The truth is, however, that you can be compliant but not secure. Remember the big Target breach in 2013? Hackers were able to steal the debit and credit card information of millions of shoppers by accessing Target’s point-of-sale systems. The irony is that, just months before the attack, Target had been certified PCI compliant. In the words of then-CEO Gregg Steinhafel, “Target was certified as meeting the standard for the payment card industry in September 2013. Nonetheless, we suffered a data breach.” Simply put: Target was compliant but not secure.
Creating a Culture
If your security awareness program is a “check the box” compliance program, you can bet your employees are going through the same motions as you are. How has that improved your security posture? It hasn’t. Instead, creating a strong security program is first and foremost about creating a culture around security. And this has to start at the top, with your executive officers and your board. If business leaders set a security-focused tone, then employees will likely follow suit.
The reason a business can be compliant and not secure is that cybersecurity isn’t a one-and-done deal. Compliance is a state; cybersecurity is an ongoing process that involves the entire organization, from the boardroom to the cubicle. The Verizon Data Breach Investigations Report shows that the human element is the leading factor in breaches today. If that’s the case, then instead of checking off the boxes, and before investing in that new machine-learning intrusion detection gizmo, consider focusing on human learning, engagement, and the behaviors that can drive a mindful security culture.