Phish Scales: Weighing Your Risk

With phishing campaigns now the #1 cause of successful breaches, it’s no wonder more and more businesses are investing in phish simulations and cybersecurity awareness programs. These programs are designed to strengthen the biggest vulnerability every business has, and the one that can’t be fixed through technological means: the human factor. One common misconception that many employers have, however, is that these programs should result in a systematic reduction of phish clicks over time. After all, what is the point of investing in phish simulations if your employees aren’t clicking on fewer phish? Well, a recent report from the National Institute of Standards and Technology (NIST) actually makes the opposite argument. Phish come in all shapes and sizes; some are easy to catch while others are far more cunning. So, if your awareness program only focuses on phish that are easy to spot or are contextually irrelevant to the business, then a low phish click rate could lead to a false sense of security, leaving employees unprepared for more crafty phishing campaigns. It’s therefore important that phish simulations present a range of difficulty, and that’s where the phish scale comes in.

Weighing Your Phish

If phish simulations vary the difficulty of their phish, then employers should expect their phish click rates to vary as well. The problem is that this makes it hard to measure the effectiveness of the training. NIST therefore introduced the phish scale as a way to rate the difficulty of any given phish and weigh that difficulty when reporting the results of phish simulations. The scale focuses on two main factors:

#1 Cues

The first factor included in the phish scale is the number of “cues” contained in a phish. A cue is anything within the email that one can look for to determine whether it is real or not. Cues include everything from technical indicators, such as suspicious attachments or an email address that is different from the sender display name, to the type of content the email uses, such as an overly urgent tone or spelling and grammar mistakes. The idea is that the fewer cues a phish contains, the more difficult it will be to spot.
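To make the idea of cues concrete, here is a minimal, hypothetical sketch of the kind of checks a reader (or a simple filter) might apply. The message fields, word lists, and heuristics below are illustrative assumptions only and are not part of NIST’s scale.

```python
import re

SUSPICIOUS_EXTENSIONS = (".exe", ".scr", ".js", ".vbs")
URGENT_WORDS = ("urgent", "immediately", "within 24 hours", "account suspended")

def count_cues(display_name, from_address, subject, body, attachments):
    """Collect a few simple cues; fewer cues generally means a harder-to-spot phish."""
    cues = []

    # Technical cue: the display name claims an organization the sender's
    # domain doesn't obviously belong to (a very rough heuristic).
    domain = from_address.rsplit("@", 1)[-1].lower()
    if display_name and display_name.split()[0].lower() not in domain:
        cues.append("display name does not match sender domain")

    # Technical cue: executable content disguised as a document.
    if any(name.lower().endswith(SUSPICIOUS_EXTENSIONS) for name in attachments):
        cues.append("suspicious attachment type")

    # Content cue: overly urgent tone.
    text = f"{subject} {body}".lower()
    if any(word in text for word in URGENT_WORDS):
        cues.append("urgent or threatening language")

    # Content cue: obvious spelling mistakes (toy check against a tiny list).
    if re.search(r"\b(immediatly|recieve|acount)\b", text):
        cues.append("spelling errors")

    return cues

# Example usage with a made-up message:
print(count_cues(
    display_name="IT Support",
    from_address="helpdesk@secure-logins.example.net",
    subject="URGENT: password expires today",
    body="Click the link immediatly to keep your acount active.",
    attachments=["statement.pdf.exe"],
))
```

A well-crafted phish would trip few or none of these checks, which is exactly why cue count alone is not enough to gauge difficulty.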

#2 Premise Alignment

The second factor in the phish scale is also the one with the stronger influence on the difficulty of a phish. Essentially, premise alignment describes how closely the content of the email matches what an employee expects or is used to seeing in their inbox. If a phish containing a fake unpaid invoice is sent to an employee who does data entry, for example, that employee is more likely to spot it than someone in accounting. Similarly, a phish targeting the education sector is not going to be very successful if it is sent to a marketing firm. In general, the more a phish fits the context of the business and the employee’s role, the harder it will be to detect.
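To show how the two factors might work together, below is a rough sketch that buckets a simulated phish into a detection-difficulty category. The thresholds and labels here are made-up assumptions for illustration; NIST publishes its own rubric for the Phish Scale.

```python
def detection_difficulty(num_cues, premise_alignment):
    """
    Rough illustration of combining the two phish-scale factors.
    premise_alignment: "low", "medium", or "high" fit with the target's context.
    The thresholds below are invented for illustration only.
    """
    few_cues = num_cues <= 8  # fewer cues -> harder to spot
    if premise_alignment == "high":
        return "very difficult" if few_cues else "moderately difficult"
    if premise_alignment == "medium":
        return "moderately difficult" if few_cues else "least difficult"
    return "least difficult"

# A contextually relevant phish with few giveaways rates hardest to detect:
print(detection_difficulty(num_cues=5, premise_alignment="high"))   # very difficult
print(detection_difficulty(num_cues=20, premise_alignment="low"))   # least difficult
```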

Managing Risk and Preparing for the Future

The importance of the phish scale goes beyond helping businesses understand why phish click rates will vary. Understanding how the difficulty of a phish affects factors such as response times and report rates will deepen the reporting of phish simulations and ultimately give organizations a more accurate view of their phish risk. In turn, this will also inform an organization’s broader security risk profile and strengthen its ability to respond to those risks.

The phish scale can also play an important role in the evolving landscape of social engineering attacks. As email filtering systems become more advanced, phishing attacks may lessen over time. But that will only lead to new forms of social engineering across different platforms. NIST therefore hopes that the work done with the phish scale can also help manage responses to these threats as they emerge.

When is Cyber Cyber? Insurance Coverage in Flux

The fear of experiencing a cyberattack is rightfully keeping business owners up at night. Not only would a cyber attack give your security team a headache, but it could also have profound and irreversible financial implications for your business. In fact, according to a report by IBM and the Ponemon Institute, the average cost of a data breach in the U.S. is over $8 million. And with 30% of companies expected to experience a breach within 24 months, it’s no surprise that businesses are seeking coverage. The problem, however, is that businesses and insurance companies alike are still grappling over exactly what is and is not covered when a cyber event occurs.

Some businesses are learning this the hard way

Recently, a phishing campaign successfully stole the credentials of an employee at a rent-servicing company that allows tenants to pay their rent online. The phishers used the employee’s credentials to take $10 million in rent money that the company owed to landlords. The company had a crime insurance policy that covered losses “resulting directly from the use of any computer to fraudulently cause a transfer,” but soon found out its claim was denied. Among the reasons the insurer gave for denying the claim was that, because the stolen funds were owed to landlords, the company did not technically suffer any first-party losses, and the losses were therefore not covered by the insurance policy.

In another case, the pharmaceutical company Merck found itself the victim of a ransomware attack that shut down more than 30,000 of its computers and 7,500 servers. The attack took weeks to resolve, and Merck is now claiming $1.3 billion in losses that it believes should be covered by its property policy. The problem, however, is that the attack on Merck was actually a by-product of a malware campaign that the Russian government was waging against Ukraine, which happened to spread to companies in other countries. The insurer therefore denied the claim, stating that its property coverage excludes any incident considered an “act of war.”

Silence is Deadly

The Merck example above also illustrates the concept of “silent,” or “non-affirmative,” cyber. Basically, these are standard insurance lines, like property or crime, in which cyber acts have not been specifically included or excluded. Merck filed the claim against its property policy because it sustained data loss, system loss, and business interruption losses. Silent cyber is difficult for a carrier to respond to (which is why the carrier in this case is looking to the war and terrorism exclusion to deny coverage) and even more challenging to account for. That’s one reason both carriers and businesses are looking to standalone cyber insurance, which provides both the insured and the carrier with a lot more clarity as to what is covered. (Although carriers can still deny coverage in situations where the attestations about the quality of security made up front do not measure up at claim time.)

Predicting the Unpredictable

It’s commonly said that insurers will do anything to avoid paying out claims, but the issue with cyber insurance coverage goes much deeper. The problem centers on a number of uncertainties involved in categorizing and quantifying cyber risk that make comprehensive policy writing a near-impossible task. For one, cyber insurance is a new market dealing with a relatively new problem. There are therefore not as many data points for insurers to accurately quantify risk as there are for long-standing forms of insurance.

The real problem, however, is that cyber incidents are extremely difficult to predict and reliably account for. Whereas health and natural disaster policies, for example, are based on scientific modeling that allows for a certain degree of stability in risk factors, it is much harder for insurance companies to predict when, where, and how a cyber attack might happen. Even Warren Buffett told investors that anyone who says they have a firm grasp on cyber risk “is kidding themselves.”

Reading the Fine Print

It’s important to understand that, despite the relatively unpredictable nature of cyber incidents, there are plenty of steps businesses can and should take to understand and mitigate their risk profile. Organizations with robust risk management practices can significantly reduce their vulnerability, and a strong security posture goes a long way toward minimizing those risks and providing a strong defense when a claim arises.

Unfortunately, this puts a lot of the responsibility on individual businesses when evaluating their cyber exposures and the insurance coverages that might be available to respond. A good insurance broker with expertise in cyber is essential. Much like the threat landscape, cyber insurance coverage is constantly evolving, and it is up to all parties, from businesses to carriers, to keep up.

The Human Factor of Cyber Threats

We’re number one! (Oh, that’s not a good thing?)

Yes, sometimes it’s better not to be recognized. Especially if it’s in the Verizon 2020 Data Breach Investigations Report, which shows new and emerging trends in the cyber threat landscape. Anyone who is anyone in cyber wants to get their hands on it as soon as it’s published (and we are no exception). As has been the case for many years, one of the key reasons behind data breaches involves what we do (or don’t do). In fact, this year’s report shows that 3 out of the top 5 threat actions that lead to a breach involve humans either making mistakes or being tricked. Below is a closer look at those 3 threat actions and the human factors they rely on.

1. Phishing

In this year’s report, phishing attacks lead the cyber threat pack for successful breaches. It is also the most common form of social engineering used today, making up 80% of all cases. A phish attacker doesn’t need to rely on a lot of complicated technical know-how to steal information from their victims. Instead, phishing is a cyber threat that relies exclusively on manipulating people’s emotions and short-circuiting their critical thinking to trick them into believing the email they are looking at is legitimate.

2. Misdelivery

One surprising aspect of the report is the rise of misdelivery as a cause of data breaches. This is a different kind of human-factored cyber threat: the pure and simple error. And there is nothing very complicated about it: someone within the organization accidentally sends sensitive documents or emails to the wrong person. While this may seem like a small mistake, the impact can be great, especially for industries handling highly sensitive information, such as healthcare and financial services.

3. Misconfiguration

Misconfiguration as a cause of data breaches is also on the rise, up nearly 5% from the previous year. Misconfigurations cover everything from security personnel not setting up cloud storage properly to undefined access restrictions, or even something as simple as a disabled firewall. While this form of cyber threat involves technological tools, the issue is first and foremost the errors made by those within an organization. Simply put, if a device, network, or database is not properly configured, the chances of a data breach skyrocket.
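As a small illustration of how these oversights translate into exposure, the sketch below compares a hypothetical configuration snapshot against a minimal expected baseline. The setting names and baseline values are assumptions for illustration, not a complete hardening checklist.

```python
# A hypothetical snapshot of settings, e.g. exported from a cloud console or asset inventory.
current_config = {
    "storage_bucket_public": True,      # cloud storage open to the internet
    "default_access_role": "admin",     # no least-privilege restrictions defined
    "firewall_enabled": False,          # firewall switched off "temporarily"
}

# Minimal expected baseline (illustrative, not exhaustive).
baseline = {
    "storage_bucket_public": False,
    "default_access_role": "read-only",
    "firewall_enabled": True,
}

# Flag every setting that drifts from the baseline.
findings = [
    f"{setting}: expected {expected!r}, found {current_config.get(setting)!r}"
    for setting, expected in baseline.items()
    if current_config.get(setting) != expected
]

for finding in findings:
    print("MISCONFIGURATION:", finding)
```

Even a check this simple, run routinely, catches the kind of drift that otherwise sits unnoticed until a breach.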

So What’s to Stop Us?

By and large we all understand the dangers cyber threats pose to our organizations, and the number of tools available to defend against these threats is ever-increasing. And yet, while there is now more technology to stop the intruders, at the end of the day it still comes down to the decisions we make and the behaviors we have (and which are often used against us).

We know a few things:  compliance “check the box” training doesn’t work (but you knew that already); “gotcha” training once you accidentally click on a simulated phish doesn’t work because punitive reinforcement rarely creates sustained behavior change; the IT department being the only group talking about security doesn’t work because that’s what they always talk about (if not blockchain).

Ugh. So what might work? If you want sustained cybersecurity behavior change, three things + one need to occur: 1) you need to be clear about the behaviors you want to see; 2) you need to make those behaviors easy for people to do; 3) you need people to feel successful doing them. And the “+ one” is that leadership needs to be doing and talking about the same things. In other words, the behaviors need to become part of the organizational culture and value structure.

If we design the behaviors we want and put them into practice, we can stop being number one.  At least as far as Verizon is concerned.

If You Want Risk Management To Stick, You Have To Stay Positive

Remember the sales contest from the movie, Glengarry Glen Ross?

“First prize is a Cadillac Eldorado….Third prize is you’re fired.”

We seem to think that, in order to motivate people, we need both a carrot and stick. Reward or punishment.  And yet, if we want people to change behaviors on a sustained basis, there’s only one method that works: the carrot.

One core concept I learned while applying behavior-design practices to cyber security awareness programming was that, if you want sustained behavior change (such as reducing phish susceptibility), you need to design behaviors that make people feel positive about themselves.

The importance of positive reinforcement is one of the main components of the model developed by BJ Fogg, the founder and director of Stanford’s Behavior Design Lab. Fogg discovered that behavior happens when three elements – motivation, ability, and a prompt – come together at the same moment. If any element is missing, behavior won’t occur.

I worked in collaboration with one of Fogg’s behavior-design consulting groups to bring these principles to cyber security awareness. We found that, in order to change digital behaviors and enhance a healthy cyber security posture, you need to help people feel successful. And you need the behavior to be easy to do, because you cannot assume the employee’s motivation is high.

Our program is therefore based on positive reinforcement when a user correctly reports a phish and is combined with daily exposure to cyber security awareness concepts through interactive lessons that only take 4 minutes a day.

To learn more about our work, you can read Stanford’s Peace Innovation Lab article about the project.

The upshot is that behavior-design concepts like these will not only help drive change for better cyber security awareness; they can drive change for all of your other risk management programs too.

There are many facets to the behavior design process, but if you focus on these two things (BJ Fogg’s Maxims) your risk management program stands to be in a better position to drive the type of change you’re looking for:

1) help people feel good about themselves and their work

2) promote behaviors that they’ll actually want to do

After all, I want you to feel successful, too.

1 in 5 Small Businesses Unprepared for Ransomware

In October, the FBI warned that ransomware attacks are becoming “more targeted, sophisticated, and costly.” Now, a new survey shows that small businesses are bearing the brunt of these attacks, with 46% reporting that they have been targeted.

Ransomware is a form of cyber attack in which the attacker steals or encrypts the victim’s data and demands payment before the victim can regain access to that data. The new survey highlights two issues that put small businesses in particular at high risk for further attacks and even irreparable data loss.

1. No Data Protection in Place

Perhaps the most troubling trend the survey found is that 20% of small businesses do not have data protection systems in place. Solutions such as data backup and disaster recovery tools are essential for a variety of potential issues, but especially for ransomware. According to Russell P. Reeder, the CEO of the company behind the survey, “every modern company depends on data and operational uptime for its very survival…Data protection and operational uptime have never been more important than during the unprecedented times we are currently facing.”

With a strong backup system that is tested regularly, small businesses faced with a ransomware attack are in a better position to recover their data without succumbing to the demands of the attackers. Without proper data protection systems in place, however, businesses are left in the hands of the bad guys, with no other means to recover their data. And the truth is, the more small businesses leave themselves unprotected, the more they will be targeted. Ransomware attackers are looking for easy money, and are therefore far more likely to target those who leave themselves the most vulnerable.
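“Tested regularly” can be as simple as routinely proving that backup files still exist and still match the checksums recorded when they were created. Below is a minimal sketch of such a check; the manifest format and file paths are assumptions for illustration, not a full backup strategy.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Hash a file in chunks so large backup archives don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backups(manifest_path):
    """Compare each backup file against the checksum recorded when it was created."""
    # Hypothetical manifest format: {"backup-2020-05-01.tar.gz": "<sha256>", ...}
    manifest = json.loads(Path(manifest_path).read_text())
    problems = []
    for filename, expected in manifest.items():
        path = Path(filename)
        if not path.exists():
            problems.append(f"missing: {filename}")
        elif sha256_of(path) != expected:
            problems.append(f"corrupted or altered: {filename}")
    return problems

# Example usage: run nightly and alert if anything comes back.
# issues = verify_backups("backup_manifest.json")
```

A check like this catches silently corrupted or missing backups before you need them, which is exactly when a ransomware victim discovers the problem otherwise.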

2. To Pay or Not to Pay

The survey also found that a whopping 73% of small businesses targeted by ransomware opted to pay the ransom in order to get their data back. One reason for this is that, if a business does not have proper data protection in place, restoring the data on its own may end up being more costly than simply paying the bad guys. However, this solution is misguided on a number of fronts.

First of all, there is no guarantee that paying the ransom will result in regaining all or even any of the stolen data. The survey found that 17% of those who paid the ransom did not recover all of their data. Secondly, paying the ransom is a short-term solution to a long-term problem. Paying up signals to attackers that they can squeeze money out of that business again in the future. Reporting by ProPublica also found that ransomware payments used to be substantially lower than they are now, and that the number of businesses willing to cough up the dough has driven up the price of the ransom.

Prevent and Defend

In order to defend against ransomware attacks, small businesses should first and foremost ensure they have strong data protection solutions in place. However, this is only one piece of the puzzle. Taking measures such as awareness training can help prevent these attacks in the first place. Ransomware attackers often gain access to systems through malware installed via phishing campaigns. If you and your staff are properly trained to spot deceptive practices, you already have a leg up on the bad guys. Attackers also hope that their victims will panic and make rash decisions. There is no question that falling victim to ransomware is scary stuff, but taking a few breaths, reviewing your options, and responding rationally might help keep your money and data in your hands and prevent further attacks from taking place in the future.

How Notifications are Re-Wiring Our Brains

“How prone to doubt, how cautious are the wise!”
― Homer

We’ve written before about how hackers and online scammers rely on human factors just as much as technological factors. They attempt to manipulate our emotions in order to trick us into handing over information or even money. However, the problem of social engineering goes beyond these tactics used by scammers. We’ve all experienced the anxious rush to check our notifications as soon as they come in. But these aren’t just simple habits we’ve developed — our phones, and especially notifications, are literally re-wiring how our brains work and even dulling our critical thinking skills.

Ever heard of Pavlov’s dog? It was an experiment conducted by the physiologist Ivan Pavlov in which he rang a bell when presenting food to a dog. Upon seeing the food, the dog naturally began to salivate. After a while, however, Pavlov rang the bell without giving the dog any food and found that the dog began to salivate at the sound of the bell alone, effectively re-wiring how the dog’s brain responds to certain sounds. Well, this type of conditioned response is also exactly what our phone notifications are doing to us. The ping we hear when a text or email pops up on our phone acts as a trigger for our brain to release pleasure-seeking chemicals such as dopamine. According to behavioral psychologist Susan Weinschenk, this sets us on an endless dopamine loop: “Dopamine starts you seeking, then you get rewarded for the seeking, which makes you seek more. It becomes harder and harder to stop looking at email, stop texting, or stop checking your cell phone to see if you have a message or a new text.”

However, the way notifications re-wire our brains goes beyond the endless search for more and more messages. The pleasure-seeking response that dopamine triggers can actually lower our ability to think critically, making us more susceptible to online scams. According to research conducted by the University of Florida and Google, the cognitive effects notifications have on us can lower our decision-making ability. The research found that we are more likely to detect a scam when we are stressed and on high alert, whereas pleasure-related chemicals like dopamine lower our level of alertness and make us less likely to detect potential scams. This is especially troublesome when it comes to phishing emails. Email notifications release these “feel good” chemicals, which in turn makes it harder for us to discern whether what we’re looking at is a fake.

There are, however, some steps we can take to combat this. If notifications are re-wiring our brains to be less alert, one step we can take is to simply turn off all notifications. This can limit the dopamine release that notifications trigger. Taking a few breaths before opening an email also helps. Pausing before responding to a notification can help break the “dopamine loop” by delaying the gratification cycle. Whatever method works best is up to you. The important thing is to be aware of how you respond to things like notifications. Taking the extra few seconds to think about what you’re doing and why might just save you from falling for a phish or other online scams.

Disinformation in the COVID Age

The dangers of online disinformation are by now common knowledge, but that hasn’t seemed to stop its spread. The current COVID-19 crisis has highlighted both the pervasiveness of disinformation and the danger it poses to society. We are in a situation where we need to rely on information for our health and safety. Yet, when accurate and false information sit side-by-side online, it is extremely difficult to know what to trust. The Director-General of the World Health Organization recognized this problem as early as February when he said that, alongside the pandemic, we are also fighting an “infodemic.” From articles, videos, and tweets discounting the severity of the virus to full-blown conspiracy theories, COVID-19 disinformation is everywhere.

Despite the steps social media sites have taken to combat disinformation about COVID-19, an Oxford study found that 88% of all false or misleading information about the coronavirus appears on social media sites. Another report found that, out of over 49 million tweets about the virus, nearly 38% contained misleading or manipulated content. This is largely because social media sites like Twitter and Facebook are trying to put a Band-Aid on a systemic issue. “They’ve built this whole ecosystem that is all about engagement, allows viral spread, and hasn’t ever put any currency on accuracy,” said Carl Bergstrom, a professor at the University of Washington. Simply put, the root of disinformation is not just the content being shared, but also the deep-seated practices social media platforms use to keep users engaged.

How Social Media Platforms Can Fix This

A new report by The German Marshall Fund takes the problem of disinformation head on and outlines what social media platforms can do to combat the problem and foster reliable and accurate reporting. Here are just a few of the steps the report recommends:

Design With “Light Patterns”

Websites and social media platforms often use “dark pattern” interfaces and defaults to manipulate users and hide information about how the site operates. Light pattern design, by contrast, builds transparency into how the site operates. This means using defaults that favor transparency, and labeling that shows the source of information, whether the account posting the content is verified, and even whether audio and visual content has been altered.

Consistent Enforcement of Terms of Use

While all social media platforms have in-depth rules for user activity, these terms are generally inconsistently applied and enforced. By setting a transparent standard and consistently enforcing that standard, social media platforms can more successfully combat disinformation and other toxic online activity.

Independent Accountability

Instead of using government policy to regulate content, the U.S. should set up a technology-neutral agency to hold platforms accountable for a code of conduct focused on practices such as light pattern designs. By focusing on overseeing positive platform practices, the government can avoid having a hand in decisions about what content is “good” or “bad.”

What You Can Do Now

However helpful these changes to social media platforms would be, the truth is we aren’t there yet. Fact and fiction stand side by side online, with no immediate way to discern which is which. When taking in information, it is up to you to figure out what is reputable and what is inaccurate. With the large amount of COVID-19 disinformation swarming the internet, it’s more important than ever to use our critical thinking skills in two specific ways.

Be Self-Critical

Our personal world views, biases, and emotions shape how we take in information. When looking at content online, it’s important to think about your own motivations for believing something to be true or not. Ask yourself why you think something is true or false. Is it largely because you want to believe it or disbelieve it? When we read something online that makes us angry, there is something satisfying about sharing that anger with others. Before sharing content, ask whether your desire to share it is an emotional response or because the information is accurate and important. If it’s predominantly coming from your emotions, reconsider whether it’s worth sharing.

Be Critical of All Content

In general, we should initially read everything with a degree of skepticism. Doubt everything and be your own fact checker. Look at other websites reporting the same information. Are any of them reliable? Are they all citing the same sources, and, if so, is that source reputable? Don’t share an article based solely on the headline. Read the full article to understand if the headline is based on fact or is just speculation. Look at what sort of language the article is using. Is it largely opinion based? Does it cite reputable sources? Is it written in a way that is meant to evoke an emotional response?

 

Months into the COVID-19 pandemic, we understand how our in-person interactions can have a negative impact on ourselves and those around us, but it’s important to also understand how our interactions online can lead to similar outcomes. Given the stupefying amount of disinformation about the coronavirus circulating online, it’s more important now than ever to think critically about what information you’re consuming and to be mindful of what you say and share online.

Is This Your Cybersecurity Team dealing with WFH?

Your organization’s cybersecurity team is on edge in the best of times. The bad guys are always out there and, like offensive linemen in American football who are only noticed when they commit a penalty, cybersecurity personnel are usually noticed only when something goes wrong. Now the game has changed: the quick transition to work from home, combined with the plethora of COVID-19 scams, phishing, and malware drowning cybersecurity threat intel sources—not to mention the isolation—may leave your team at a chronically high stress level. And cybersecurity is far more than just your technical safeguards. At the end of the day, the stress your team feels could lead them to put their focus in the wrong place and let their guard down.

Here’s what you can do about it

  1. Incorporate cybersecurity into your overall business strategy process – now is the time to recognize cybersecurity as a key part of the organization’s strategy, one that enables you to drive your mission forward.
  2. Be a part of the cybersecurity planning process – be active, listen, and understand how your team is handling this.
  3. Leverage your bully pulpit – communicate to the staff about the key areas your cybersecurity team is focused on and the role they are playing to keep the organization secure while everyone is working from home.
  4. Check in – take the time to just check in and see how they are doing. A little goes a long way.

The truth is, when it comes to cybersecurity, your first and most effective line of defense is not your firewall or encryption protocol. It’s the people that form a team dedicated to protecting your organization. Working from home poses unique cybersecurity challenges, and it’s up to you to make sure your team is given the attention they need to do their job well.

 

Contact Tracing Technology Raises Privacy Concerns

As the COVID-19 pandemic continues, the world has turned to the tech industry to help mitigate the spread of the virus and, eventually, help transition out of lockdown. Earlier this month, Apple and Google announced that they are working together to build contact-tracing technology that will automatically notify users if they have been in proximity to someone who has tested positive for COVID-19. However, reports show that there is a severe lack of evidence that these technologies can accurately report infection data. Questions also remain about whether these types of apps can effectively assist the marginalized populations where the disease seems to have the largest impact. Combined with the invasion of privacy involved, the U.S. needs to more seriously interrogate whether the potential rewards of app-based contact tracing outweigh the obvious—and potentially long-term—risks involved.

First among the concerns is the potential for the information collected to be used to identify and target individuals. For example, in South Korea, some have used the information collected through digital contact tracing to dox and harass infected individuals online. Some experts fear that the collected data could also be used as a surveillance system to restrict people’s movement through monitored quarantine, “effectively subjecting them to home confinement without trial, appeal or any semblance of due process.” Such tactics have already been used in Israel.

Apple and Google have taken some steps to mitigate the concerns over privacy, claiming they are developing their contact tracing tools with user privacy in mind. According to Apple, the tool will be opt-in, meaning contact tracing is turned off by default on all phones. They have also enhanced their encryption technology to ensure that any information collected by the tool cannot be used to identify users, and promise to dismantle the entire system once the crisis is over.
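The general idea behind this kind of privacy protection is that phones broadcast short-lived, random-looking identifiers rather than anything tied to a person. The sketch below illustrates that concept with a standard HMAC construction; it is a deliberate simplification for illustration and is not the actual Apple/Google key-derivation scheme.

```python
import hmac
import hashlib
import os

# A device generates a fresh secret key each day and keeps it on the phone.
daily_key = os.urandom(16)

def rolling_identifier(key, interval_number):
    """Derive a short identifier for one ~10-minute broadcast interval.
    Without the daily key, observers cannot link the identifiers to a
    person, or even to each other."""
    mac = hmac.new(key, interval_number.to_bytes(4, "big"), hashlib.sha256)
    return mac.digest()[:16]

# The phone broadcasts a new identifier each interval over Bluetooth.
broadcasts = [rolling_identifier(daily_key, i).hex() for i in range(3)]
print(broadcasts)

# Only if the user later tests positive (and opts in) is the daily key shared,
# letting other phones re-derive these identifiers and check for a local match.
```

The privacy debate turns on what happens around this core: who holds the keys, how long they are retained, and whether the surrounding data can be combined with other sources to re-identify people.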

Risk

Apple and Google are not using the phrase “contact tracing” for their tool, instead branding it as “exposure notification.” However, changing the name to sound less invasive doesn’t do anything to ensure privacy. And despite the steps Apple and Google are taking to make their tool more private, there are still serious short- and long-term privacy risks involved.

In a letter sent to Apple and Google, Senator Josh Hawley warns that the impact this technology could have on privacy “raises serious concern.” Despite the steps the companies have taken to anonymize the data, Senator Hawley points out that by comparing de-identified data with other data sets, individuals can be re-identified with ease. This could potentially create an “extraordinarily precise mechanism for surveillance.”

Senator Hawley also questions Apple and Google’s commitment to delete the program after the crisis comes to an end. Many privacy experts have echoed these concerns, worrying what impact these expanded surveillance systems will have in the long term. There is plenty of precedent to suggest that relaxing privacy expectations now will change individual rights far into the future. The “temporary” surveillance program enacted after 9/11, for example, is still in effect today and was even renewed last month by the Senate.

Reward?

Contact tracing is often heralded as a successful method to limit the spread of a virus. However, a review published by a UK-based research institute shows that there is simply not enough evidence to be confident in the effectiveness of using technology to conduct contact tracing. The report highlights the technical limitations involved in accurately detecting contact and distance. Because of these limitations, this technology might lead to a high number of false positives and negatives. What’s more, app-based contact tracing is inherently vulnerable to fraud and cyberattack. The report specifically worries about the potential for “people using multiple devices, false reports of infection, [and] denial of service attacks by adversarial actors.”

Technical limitations aside, the effectiveness of digital contact tracing also requires both a large compliance rate and a high level of public trust and confidence in the technology. Nothing suggests Apple and Google can guarantee either of these requirements. The lack of evidence for the effectiveness of digital contact tracing calls into question the use of such technology at the cost of serious privacy risks to individuals.

If we want to engage technology appropriately, we should determine the scope of the problem with an eye toward assisting the most vulnerable populations first, while ensuring that the intended outcomes can be achieved in a privacy-preserving manner. Governments need to lay out strict plans for oversight and regulation, coupled with independent review. Before compromising individual rights and privacy, the U.S. needs to thoroughly assess the effectiveness of this technology while implementing strict and enforceable safeguards to limit the scope and length of the program. Absent that, any further intrusion into our lives, especially if the technology is not effective, will be irreversible. In this case, the cure may well be worse than the disease.

COVID-19 Loan Breach Exposes 8,000 Applicants

This week, reports surfaced that the Small Business Administration’s COVID-19 loan program experienced an unintentional data breach last month, leaving the personal information of up to 8,000 applicants temporarily exposed. This is just the latest in a long line of COVID-19 cyber-attacks and exposures since the pandemic began.

The affected program is the SBA’s long-standing Economic Injury Disaster Loan (EIDL) program, which Congress recently expanded to help small businesses affected by the COVID-19 crisis. The EIDL is separate from the new Paycheck Protection Program, which is also run by the SBA.

According to a letter sent to affected applicants, on March 25th the SBA discovered that the application system had exposed personal information to other applicants using the system. The information potentially exposed includes names, addresses, phone numbers, birth dates, email addresses, citizenship status, insurance information, and even Social Security numbers of applicants.

According to the SBA, upon discovering the issues they “immediately disabled the impacted portion of the website, addressed the issue, and relaunched the application portal.” All businesses affected by the COVID-19 loan program breach were eventually notified by the SBA and offered a year of free credit monitoring.

A number of recent examples show that the severe economic impact of the pandemic has left the SBA scrambling. Typically, the SBA is meant to issue funds within three days of receiving an application. However, with more than 3 million applications flooding in, some have had to wait weeks for relief.

The unprecedented number of applications filed, coupled with the fact that the SBA is the smallest major federal agency — suffering an 11% funding cut in the last budget proposal — likely contributed to the accidental exposure of applicant data. However, whether accidental or not, a data breach is still a data breach. It’s important that all organizations take the time to ensure their systems and data remain secure, and that mistakes do not lead to more work and confusion during a time of crisis.
