Last week, IBM and the Ponemon Institute released their annual Cost of a Data Breach Report. For the past 15 years, the report has highlighted recurring and emerging factors that contribute to the cost of data breaches, as well as the root causes of those breaches. One of the key findings in this year’s report is that human-factored cyber attacks not only make up a large percentage of all malicious attacks, but are also incredibly costly to the businesses that suffer breaches. This only confirms the importance of cyber awareness training for employees to limit the risk of a human-factored attack.
There are many different causes of a data breach, some of which are merely accidental. According to this year’s report, however, malicious attacks now make up 52% of all breaches. That wasn’t always the case: malicious attacks have grown 24% in just six years. Malicious attacks are also the most expensive, costing businesses an average of $4.27 million per breach. That’s nearly $1 million more than the average cost of breaches from all other causes.
Given the frequency and cost of malicious attacks, it’s important to look more closely at the different threats behind their rise, and the data is surprising. While expected threats such as system vulnerabilities and malicious insiders are certainly present, human-factored cyber attacks make up a large share of all malicious attacks. Threats ranging from phishing and business email compromise to social engineering and cloud misconfigurations are all rooted in human rather than technical vulnerability, and together account for 41% of all malicious attacks leading to data breaches. Indeed, this report correlates with the findings presented in the Verizon 2020 Data Breach Investigations Report.
Human-factored cyber attacks aren’t something you can protect yourself against strictly through technical safeguards. Instead, protecting against these vulnerabilities requires working with employees, establishing proper quality control protocols, ensuring you have the right expertise on your team, and using cyber awareness training to help build safer online habits.
As a Fortune 100 CISO once told me, “at the end of the day, every cyber incident starts with someone making a decision.”
By now it is commonly understood that free online services such as social media, search engines, and email are not actually free. Instead, we use those services in exchange for data about who we are and what we want, which can then be used to show us highly targeted advertising or simply sold to third-party companies. Our identities are now among the most valuable commodities in the world, and we hand them over to tech giants every single day.
That’s why there is a growing movement among some lawmakers to make companies pay consumers for the data they use. The idea, known as data dividends, is now being pushed by politicians like Andrew Yang and California governor Gavin Newsom, who argue that, by ensuring companies pay users for their data, consumers will be empowered to take more control of their online identity and privacy.
The problem, however, is that once you take a closer look at the concept Yang and Governor Newsom are pushing, it becomes clear that this project, which is meant to promote privacy, ends up reinforcing a system that commodifies consumer data and disincentivizes privacy-first practices. We are treading a dangerous path if we attempt to monetize identity.
Here is why:
Paying consumers for their data doesn’t protect their privacy. Instead, it ends up justifying the current practice of data mining that undermines the right to privacy. Certain companies are already using similar practices: Amazon, for example, offered users $25 Amazon gift cards in exchange for full-body 3D scans. It’s a dramatic example, but fundamentally equivalent to what lawmakers are now proposing.
Privacy is a human right, and as such it cannot be bought and sold in a free and open society. It’s like saying that companies can take away your right to free expression so long as they compensate you for it. Making money off of user data and sharing it with third parties has already been normalized by tech companies, and data dividends only further validate these practices.
This isn’t to say that distributing the money earned from personal information back to the consumer is necessarily a bad thing; it’s simply an issue entirely separate from privacy. If companies are required to pay out data dividends, it would in no way lessen the importance of ensuring the privacy of our identities and data.
One of the main tenets of behavioral science is something called “operant conditioning.” It’s a fancy phrase for a concept that’s actually pretty simple: a behavior followed by a reward is more likely to be repeated than a behavior followed by a punishment. While this is a common-sense idea, when it comes to our own goals, we don’t often think this way. Instead, we’ve grown up with the myth that true success comes only with struggle and that our biggest opponent is ourselves. Instead of focusing on our wins, we focus on our losses and think that to get anything accomplished we have to be hard on ourselves. And how well does that usually work out?
In order to create new behaviors that you can actually sustain, you need positive reinforcement. In other words, if you set yourself a goal that is too difficult or takes too long to achieve, your focus will be on what you’re doing wrong, and that will lead you to give up. Instead, it’s important to build on goals that you can actually achieve and feel positive about. This isn’t to say you shouldn’t set big goals for yourself, but that, in order to get there, you first have to focus on the wins: the small, achievable goals that you can then build upon to make the changes you want for yourself.
This is a lesson that most cybersecurity training programs have yet to learn. Phishing simulation programs will often focus on the losses: when you click on a phish or fail to report it to your IT department. Accountability with compassion is far more effective for driving long-term behavior change, and training programs that reward positive behaviors rather than punish bad ones are more likely to help users achieve their goals.
Using positive reinforcement and focusing on the wins helps us build the skills and abilities that enable us to do great things. And, perhaps after we have accomplished the large goal we were after, we’ll realize that the actual goal was to just feel better about ourselves.
Maybe the greatest misconception about forming new habits is that the biggest factor for success is the motivation to change. We often imagine that as long as we want to make a change in our lives, we have the power to do it. In fact, motivation is the least reliable element in making behavior changes. The hard truth is that simply wanting to make a change is far from enough.
The reason? Motivation isn’t a static thing; it comes and goes in waves. It’s therefore tough to keep our motivation high enough for long enough to produce lasting behavior change. Take the response to the COVID-19 pandemic, for example. When it first appeared in the U.S., we were highly motivated to socially distance. As time went on, however, more and more people started to take risks and go out more. That’s not because the dangers were any less present, but because our motivation to stay inside started to wane. The point is, if the sole component of any behavior change is motivation, once that motivation starts to diminish, so will the new habit.
Of course, we have to want to make a change at some level, but we also have to realize that wanting alone is simply not enough. Instead, we need to start with changes that require the least amount of motivation to occur. This is the idea behind BJ Fogg’s Tiny Habits that we wrote about last week. If you want to start reading more, it might be tempting to try reading a chapter or two every day. But more often than not, you’re not going to be motivated to keep that up for long. If, instead, your goal is just to read one paragraph a couple of times a day, you’re far more likely to keep up the new habit. Then, over time, you’ll find you need less and less motivation to read more, until you don’t even think about it anymore.
This can be a hard pill to swallow. We like to believe that we can do anything we set our minds to, and it’s a little disheartening to think we don’t have as much control over our motivation as we might prefer. Looked at from a different angle, however, understanding this fact allows us to focus on what we can control: setting achievable goals and rewarding ourselves when we meet them. Focusing on that, rather than on our inability to keep our motivation high, will lead to more successful behavior change.
Given that phishing attacks are now the #1 cause of successful data breaches, it’s no surprise that many individuals and organizations are looking for tools to help them get better at spotting phish. The problem, however, is that most of the available education tools rely on “passive” training material: infographics, videos, and sample phish. While these educational tools might teach you a few facts and figures, they don’t always lead to a long-term change in how users respond to phish. Instead, educators should be looking for new tools and methods that change the very way we look at our emails. You know the phrase “Give someone a fish, feed him for a day. Teach someone to fish, feed him for a lifetime”? Well, the same is true for phish, too.
The idea is simple: instead of just looking at examples of phish, by engaging in the process of creating a phish, you internalize the tactics and tricks scammers use in real life and become better able to spot them.
There is actually a method that has been proven to work in similar settings, such as recognizing propaganda and misinformation. It’s called inoculation theory. The idea is similar to how vaccines work: by exposing people to small doses of something dangerous, and by actively engaging them in the process, they can better defend themselves against the real thing in the future. Cambridge University used this theory to create an online game that asks users to create their own fake news.
In a similar way, teaching someone how to make a phish creates an engaging way for users to understand how actual phishers think and what tactics they use to trick people. We believe this form of training has the potential to be far more successful in helping users create long-lasting change and stay safer online.
In 1989 the U.S. Postal Service issued new stamps that featured four different kinds of dinosaurs. While the stamps look innocent enough, their release was the source of controversy among paleontologists, and even serves as an example of how misinformation works by making something false appear to be true.
The controversy revolves around the inclusion of the brontosaurus, which, according to scientists at that time, never existed. In 1874, paleontologist O.C. Marsh discovered the bones of what he thought was a new species of dinosaur. He called it the brontosaurus. However, as more scientists discovered similar fossils, they realized that what Marsh had found was in fact a species previously identified as apatosaurus, which, ironically, is Greek for “deceptive lizard.” Paleontologists were therefore rightly upset to see the brontosaurus included on a stamp with real dinosaurs.
Over 30 years later, however, these stamps may have something to teach us about how disinformation works today. They show that disinformation is not simply about falsehoods: it’s about how those falsehoods are presented so as to seem true.
The stamps help illustrate this in three ways:
1) Appeal to Authority
One of the ways something can appear to be true is when the information comes from a figure of authority. Because the stamps were officially released by the U.S. government, the information contained on them takes on the appearance of truth. Of course, no one would think the USPS is an authority on dinosaurs, and yet the very position of authority the postal service occupies seems to serve as a guarantee of the truth of what is presented. The appearance of authority, however misplaced, is often enough for us to believe something to be true.
This is a tactic used by scammers all the time. It’s the reason why you’ve probably gotten a lot of robocalls claiming to be from the IRS. Phishing emails also use this tactic by spoofing the ‘from’ field and using logos of businesses and government agencies. We too often assume that, just because information appears to be coming from an authority, it must be true.
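To make the spoofing tactic concrete, here is a minimal sketch in Python (standard library only; all the addresses are made up for illustration) showing how easily a message can declare any ‘from’ address it likes, along with one simple red flag a reader or filter might check for:

```python
from email.message import EmailMessage
from email.utils import parseaddr

# Nothing in the email message format itself stops a sender from
# writing any address they like into the From header.
msg = EmailMessage()
msg["From"] = "IRS Refunds <refunds@irs.gov>"  # forged display address (illustrative)
msg["Reply-To"] = "collect@scam.example"       # where replies would actually go
msg["Subject"] = "Your refund is waiting"
msg.set_content("Click the link below to claim your refund.")

def domain(header_value: str) -> str:
    """Extract the domain part of an address header."""
    return parseaddr(header_value)[1].rsplit("@", 1)[-1].lower()

# One simple red flag: the From domain and the Reply-To domain disagree.
mismatch = domain(msg["From"]) != domain(msg["Reply-To"])
print("Possible spoof!" if mismatch else "Domains match.")
```

Real mail systems rely on much stronger signals than this (SPF, DKIM, and DMARC checks, for instance), but the point stands: the visible ‘from’ field is just text the sender chooses.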
2) Truths and a Lie
Another way something false can appear true is by placing what is fake among things that are actually true. The fact that the other stamps in the collection — the tyrannosaurus, the stegosaurus, and the pteranodon — are real dinosaurs gives the brontosaurus the appearance of truth. By placing one piece of false information alongside recognizably true information, that piece of false information starts to look more and more like a truth.
Fake news on social media uses this tactic all the time. Phishing attacks also take advantage of this by replicating certain aspects of legitimate emails. This might include mentioning information in the news, such as COVID-19, or even including things like an unsubscribe link at the end of the email. This tactic works by using legitimate information and elements in an email to cover up what is fake.
3) Anchoring
The US Postal Service did not invent the brontosaurus: in fact, the American Museum of Natural History named a skeleton brontosaurus in 1905. Once a claim is stated as truth, it becomes very hard to dislodge. This was actually the reasoning the US Postal Service gave when challenged: “Although now recognized by the scientific community as Apatosaurus, the name Brontosaurus was used for the stamp because it is more familiar to the general population.” Anchoring is a key aspect of disinformation, especially with regard to its persistence.
Overall, what the brontosaurus stamp shows us is that our ability to discern the true from the false largely depends on how information is presented to us. Scammers and phishers have understood this for a long time. The first step in critically engaging with information online is therefore to recognize that just because something appears true does not, in fact, make it true. Given the continued rise of disinformation, this is a lesson that is more important now than ever. In fact, it is unlikely disinformation will ever become extinct.