This week, Canada announced that, along with Microsoft and the Alliance for Securing Democracy, it will lead an initiative to counter election interference, as outlined in the Paris Call for Trust and Security in Cyberspace. The Paris Call is an international agreement outlining steps to establish universal norms for cybersecurity and privacy. The agreement has now been signed by over 550 entities, including 95 countries and hundreds of nonprofits, universities, and corporations. Nations such as Russia, China, and Israel did not sign the agreement, but one country's absence is particularly notable: the U.S.
While the Paris Call is largely symbolic, with no legally binding standards, it does outline nine principles that signatories commit to upholding and promoting. Among these principles are the protection of individuals and infrastructure from cyberattacks, the defense of intellectual property, and the defense of elections against interference.
Non-Government Entities Are Governing Cybersecurity Norms
Despite the U.S.'s absence from the agreement, many of the country's largest tech companies signed it, including IBM, Facebook, and Google. In addition, Microsoft says it worked especially closely with the French government to write the Paris Call. The inclusion of private organizations in the agreement is a sign of the increasing importance of non-governmental entities in shaping and enforcing cybersecurity practices. The fact that Microsoft, and not the U.S., is taking the lead on the agreement's principle to counter election interference is a particularly strong example of how private companies are shaping the relationship between technology and democracy.
A Flawed Step, But a Step Nonetheless
Some organizations that signed the agreement, however, remain wary of private influence and how it might affect some of the principles of the Paris Call. Access Now, a non-profit dedicated to a free and open internet, raised concerns about how the agreement might give too much authority to private companies. One of the agreement’s principles, for example, encourages stakeholders to cooperate to address cyber criminality, which Access Now worries could be interpreted as a relaxing of judicial standards that would allow for an “informal exchange of data” between companies and government agencies. The non-profit also worries the principle concerning the protection of intellectual property could lead to a “heavy-handed approach,” by both private and public entities, “that could limit the flow of information online and risk freedom of expression and the right to privacy.”
On the opposite side, others have argued that the principles are more fluff than substance: fairy tales without specificity or accountability.
That being said, the Paris Call is at the very least an acknowledgment that, much like climate change, our global reliance on technology requires policy coordination on a global scale, involving not only nations but also the technology companies that are helping define our future. After all, it's hard to imagine solving any global issue without coordinated technology supporting us. The Paris Call may not be the right answer, but we should probably pull up a chair and join the conversation.
With phishing campaigns now the #1 cause of successful breaches, it's no wonder more and more businesses are investing in phish simulations and cybersecurity awareness programs. These programs are designed to strengthen the biggest vulnerability every business has, one that can't be fixed through technological means: the human factor. One common misconception many employers have, however, is that these programs should result in a systematic reduction of phish clicks over time. After all, what is the point of investing in phish simulations if your employees aren't clicking on fewer phish? Well, a recent report from the National Institute of Standards and Technology (NIST) actually makes the opposite argument. Phish come in all shapes and sizes; some are easy to catch while others are far more cunning. So, if your awareness program only focuses on phish that are easy to spot or are contextually irrelevant to the business, a low phish click rate could create a false sense of security, leaving employees unprepared for craftier phishing campaigns. It's therefore important that phish simulations present a range of difficulty, and that's where the phish scale comes in.
Weighing Your Phish
If phish simulations vary the difficulty of their phish, then employers should expect their phish click rates to vary as well. The problem is that this makes it hard to measure the effectiveness of the training. NIST therefore introduced the phish scale as a way to rate the difficulty of any given phish and weigh that difficulty when reporting the results of phish simulations. The scale focuses on two main factors:
#1 Number of Cues
The first factor in the phish scale is the number of "cues" contained in a phish. A cue is anything within the email that one can look for to determine whether it is real or not. Cues range from technical indicators, such as suspicious attachments or an email address that differs from the sender's display name, to the type of content the email uses, such as an overly urgent tone or spelling and grammar mistakes. The idea is that the fewer cues a phish contains, the more difficult it will be to spot.
#2 Premise Alignment
The second factor in the phish scale is also the one with the stronger influence on a phish's difficulty. Essentially, premise alignment measures how closely the content of the email aligns with what an employee expects or is used to seeing in their inbox. If a phish containing a fake unpaid invoice is sent to an employee who does data entry, for example, that employee is more likely to spot it than someone in accounting. Likewise, a phish targeting the education sector is not going to be very successful if sent to a marketing firm. In general, the more a phish fits the context of a business and the employee's role, the harder it will be to detect.
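The two factors above can be sketched in code. Note that NIST's actual phish scale uses published scoring tables; the thresholds and numeric scales below are illustrative assumptions, not NIST's methodology.

```python
def rate_phish_difficulty(cue_count: int, premise_alignment: int) -> str:
    """Estimate how hard a simulated phish is to detect.

    cue_count: number of observable cues (suspicious attachments, mismatched
        sender address, urgent tone, typos). Fewer cues = harder to spot.
    premise_alignment: 0-10 rating (an assumed scale) of how well the email's
        premise fits the recipient's role and context. Higher = harder to spot.
    """
    # Premise alignment dominates: a highly relevant phish with few cues
    # is the hardest case.
    if premise_alignment >= 7 and cue_count <= 5:
        return "very difficult"
    if premise_alignment >= 4 or cue_count <= 8:
        return "moderately difficult"
    return "least difficult"

# A generic, typo-ridden phish sent to the wrong audience is easy to catch:
print(rate_phish_difficulty(cue_count=12, premise_alignment=1))
# A clean, role-relevant invoice phish is far harder:
print(rate_phish_difficulty(cue_count=3, premise_alignment=9))
```

The point of a rating like this is not the exact numbers but the ranking: simulations should include phish from every band, not just the easy ones.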
Managing Risk and Preparing for the Future
The importance of the phish scale goes beyond helping businesses understand why phish click rates will vary. Understanding how the difficulty of a phish affects factors such as response times and report rates will deepen the reporting of phish simulations and ultimately give organizations a more accurate view of their phish risk. In turn, this will also inform an organization's broader security risk profile and strengthen its ability to respond to those risks.
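One hypothetical way to fold difficulty into simulation reporting is to weight each click by how hard the phish was to spot, so a click on a "very difficult" phish doesn't count the same as a click on an obvious one. The weight values below are assumptions for illustration, not part of NIST's guidance.

```python
# Assumed weights: clicking a harder phish is penalized less than
# clicking an easy one would suggest, relative to the total possible.
DIFFICULTY_WEIGHT = {"least": 1.0, "moderate": 2.0, "very": 3.0}

def weighted_click_score(results):
    """results: list of (difficulty, clicked) tuples from one simulation run.

    Returns the share of difficulty-weighted "opportunities" that ended
    in a click, rather than a raw click rate.
    """
    total = sum(DIFFICULTY_WEIGHT[d] for d, _ in results)
    clicked = sum(DIFFICULTY_WEIGHT[d] for d, did_click in results if did_click)
    return clicked / total

run = [("least", False), ("least", False), ("moderate", True), ("very", True)]
# Raw click rate would be 0.5; the weighted view reflects that the
# clicks happened on the harder phish.
print(round(weighted_click_score(run), 2))
```

A report built this way lets two campaigns with the same raw click rate be compared honestly when one used much harder phish.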
The phish scale can also play an important role in the evolving landscape of social engineering attacks. As email filtering systems become more advanced, phishing attacks may lessen over time. But that will only lead to new forms of social engineering across different platforms. NIST therefore hopes that the work done with the phish scale can also help manage responses to these threats as they emerge.
The fear of experiencing a cyberattack is rightfully keeping business owners up at night. Not only would a cyberattack give your security team a headache, but it could also have profound and irreversible financial implications for your business. In fact, according to a report by IBM and the Ponemon Institute, the average cost of a data breach in the U.S. is over $8 million. And with 30% of companies expected to experience a breach within 24 months, it's no surprise that businesses are seeking coverage. The problem, however, is that businesses and insurance companies alike are still grappling over exactly what is and is not covered when a cyber event occurs.
Some businesses are learning this the hard way
Recently, a phishing campaign successfully stole the credentials of an employee at a rent-servicing company that allows tenants to pay their rent online. The phishers used the employee's credentials to take $10 million in rent money that the company owed to landlords. The company had a crime insurance policy that covered losses "resulting directly from the use of any computer to fraudulently cause a transfer," but soon found out its claim was denied. Among the reasons the insurer gave for denying the claim was that, because the stolen funds were owed to landlords, the company did not technically suffer any first-party losses, and the losses were therefore not covered by the policy.
In another case, the pharmaceutical company Merck fell victim to a ransomware attack that shut down more than 30,000 of its computers and 7,500 servers. The attack took weeks to resolve, and Merck is now claiming $1.3 billion in losses that it believes should be covered by its property policy. The problem, however, is that the attack on Merck was actually a by-product of a malware campaign the Russian government was waging against Ukraine, which happened to spread to companies in other countries. The insurer therefore denied the claim, stating that its property coverage excludes any incident considered an "act of war."
Silence is Deadly
The Merck example above also illustrates the concept of "silent," or "non-affirmative," cyber. Basically, these are standard insurance lines, like property or crime, in which cyber acts have not been specifically included or excluded. Merck filed its claims against the property policy because it sustained data loss, system loss, and business-interruption losses. Silent cyber is difficult for a carrier to respond to (which is why the carrier in this case is invoking the war and terrorism exclusion to deny coverage) and even more challenging to account for. That's one reason both carriers and businesses are looking to standalone cyber insurance, which provides both the insured and the carrier with far more clarity as to what is covered. (Although carriers can still deny coverage when the up-front attestations about the quality of security do not measure up at claim time.)
Predicting the Unpredictable
It's commonly said that insurers will do anything to avoid paying out claims, but the issue with cyber insurance coverage goes much deeper. The problem centers on a number of uncertainties involved in categorizing and quantifying cyber risk that make comprehensive policy writing a near-impossible task. For one, cyber insurance is a new market dealing with a relatively new problem. Insurers therefore have far fewer data points with which to accurately quantify risk than they do for long-standing forms of insurance.
The real problem, however, is that cyber incidents are extremely difficult to predict and reliably account for. Whereas health and natural disaster policies, for example, are based on scientific modeling that allows for a certain degree of stability in risk factors, it is much harder for insurance companies to predict when, where, and how a cyber attack might happen. Even Warren Buffett told investors that anyone who says they have a firm grasp on cyber risk “is kidding themselves.”
Reading the Fine Print
It's important to understand that, despite the relatively unpredictable nature of cyber incidents, there are plenty of steps businesses can and should take to understand and mitigate their risk profile. Organizations with robust risk management practices can significantly reduce their vulnerability, and a strong security posture goes a long way toward minimizing those risks and providing a strong defense when a claim arises.
Unfortunately, this puts a lot of the responsibility on individual businesses when evaluating their cyber exposures and the insurance coverages that might be available to respond. A good insurance broker with expertise in cyber is essential. Much like the threat landscape, cyber insurance coverage is constantly evolving, and it is up to all parties, from businesses to carriers, to keep up.
Yes, sometimes it's better not to be recognized. Especially if it's in the Verizon 2020 Data Breach Investigations Report, which shows new and emerging trends in the cyber threat landscape. Anyone who is anyone in cyber wants to get their hands on it as soon as it's published (and we are no exception). As has been the case for many years, one of the key drivers behind data breaches is what we do (or don't do). In fact, this year's report shows that three of the top five threat actions that lead to a breach involve humans either making mistakes or being tricked. Below is a closer look at those three threat actions and the human factors they rely on.
In this year's report, phishing attacks lead the cyber threat pack for successful breaches. It is also the most common form of social engineering used today, making up 80% of all cases. A phish attacker doesn't need a lot of complicated technical know-how to steal information from their victims. Instead, phishing is a cyber threat that relies exclusively on manipulating people's emotions and exploiting lapses in critical thinking to trick them into believing the email they are looking at is legitimate.
One surprising aspect of the report is the rise of misdelivery as a cause of data breaches. This is a different kind of human factored cyber threat: the pure and simple error. And there is nothing very complicated about it: someone within the organization will accidentally send sensitive documents or emails to the wrong person. While this may seem like a small mistake, the impact can be great, especially for industries handling highly sensitive information, such as healthcare and financial services.
Misconfigurations as a cause of data breaches are also on the rise, up nearly 5% from the previous year. Misconfigurations cover everything from security personnel not setting up cloud storage properly to undefined access restrictions, or even something as simple as a disabled firewall. While this form of cyber threat involves technological tools, the issue is first and foremost the errors made by those within an organization. Simply put, if a device, network, or database is not properly configured, the chances of a data breach skyrocket.
So What’s to Stop Us?
By and large we all understand the dangers cyber threats pose to our organizations, and the number of tools available to defend against these threats is ever-increasing. And yet, while there is now more technology to stop intruders, at the end of the day it still comes down to the decisions we make and the behaviors we exhibit (which are often used against us).
We know a few things: compliance “check the box” training doesn’t work (but you knew that already); “gotcha” training once you accidentally click on a simulated phish doesn’t work because punitive reinforcement rarely creates sustained behavior change; the IT department being the only group talking about security doesn’t work because that’s what they always talk about (if not blockchain).
Ugh. So what might work? If you want sustained cybersecurity behavior change, three things + one need to occur: 1) you need to be clear about the behaviors you want to see; 2) you need to make them easy for people to do; 3) you need people to feel successful doing them. And the "+ one" is that leadership needs to be doing and saying the same things. In other words, the behaviors need to become part of the organizational culture and value structure.
If we design the behaviors we want and put them into practice, we can stop being number one. At least as far as Verizon is concerned.
Remember the sales contest from the movie, Glengarry Glen Ross?
“First prize is a Cadillac Eldorado….Third prize is you’re fired.”
We seem to think that, in order to motivate people, we need both a carrot and stick. Reward or punishment. And yet, if we want people to change behaviors on a sustained basis, there’s only one method that works: the carrot.
One core concept I learned while applying behavior-design practices to cyber security awareness programming was that, if you want sustained behavior change (such as reducing phish susceptibility), you need to design behaviors that make people feel positive about themselves.
The importance of positive reinforcement is one of the main components of the model developed by BJ Fogg, the founder and director of Stanford’s Behavior Design Lab. Fogg discovered that behavior happens when three elements – motivation, ability, and a prompt – come together at the same moment. If any element is missing, behavior won’t occur.
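Fogg's model is often summarized as B = MAP: behavior happens when motivation, ability, and a prompt converge. As a rough sketch, the convergence can be expressed as a threshold check. The numeric scale, the multiplicative form, and the "action line" value below are illustrative assumptions, not part of Fogg's published model.

```python
def behavior_occurs(motivation: float, ability: float, prompted: bool,
                    action_line: float = 1.0) -> bool:
    """Toy B = MAP check: motivation and ability on an assumed 0-1 scale.

    The two factors compensate for each other: high ability can make up
    for modest motivation, and vice versa. But without a prompt, no
    amount of either produces the behavior.
    """
    if not prompted:
        return False  # no prompt, no behavior
    # Assumed "action line": the combined factors must cross a threshold.
    return motivation * ability * 4 >= action_line

# Making the behavior easy (high ability) compensates for modest motivation:
print(behavior_occurs(motivation=0.4, ability=0.9, prompted=True))
# Without a prompt, nothing happens even when both factors are high:
print(behavior_occurs(motivation=0.9, ability=0.9, prompted=False))
```

This is why the program described below leans on making behaviors easy and prompting them daily, rather than trying to sustain high motivation.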
I worked in collaboration with one of Fogg’s behavior-design consulting groups to bring these principles to cyber security awareness. We found that, in order to change digital behaviors and enhance a healthy cyber security posture, you need to help people feel successful. And you need the behavior to be easy to do, because you cannot assume the employee’s motivation is high.
Our program is therefore based on positive reinforcement when a user correctly reports a phish and is combined with daily exposure to cyber security awareness concepts through interactive lessons that only take 4 minutes a day.
The upshot is that behavior-design concepts like these will not only help drive change for better cyber security awareness; they can drive change for all of your other risk management programs too.
There are many facets to the behavior design process, but if you focus on these two things (BJ Fogg’s Maxims) your risk management program stands to be in a better position to drive the type of change you’re looking for:
1) help people feel good about themselves and their work
2) promote behaviors that they’ll actually want to do