By now, almost everyone has heard about the threat of misinformation within our political system. At this point, fake news is old news. However, this doesn’t mean the threat is any less dangerous. In fact, over the last few years misinformation has spread beyond the political world and into the private sector. From a fake news story claiming that Coca-Cola was recalling Dasani water because of a deadly parasite in the bottles, to false reports that an Xbox killed a teenager, more and more businesses are facing online misinformation about their brands, damaging their reputations and financial stability. While businesses may not think to take misinformation attacks into account when evaluating the cyber threat landscape, it’s increasingly clear that misinformation should be a primary concern for organizations. Just as businesses are beginning to understand the importance of being cyber-resilient, organizations also need policies in place to stay misinformation-resilient. This means organizations need to start taking both a proactive and a reactive stance towards future misinformation attacks.
Perhaps the method of disinformation we are all most familiar with is the use of social media to quickly spread false or sensationalized information about a person or brand. However, disinformation can take a number of different guises. Fraudulent domains, for example, can be used to impersonate companies in order to misrepresent brands. Attackers also create copycat sites that look like your website but actually serve malware that visitors download when they visit the site. Inside personnel can weaponize digital tools to settle scores or hurt the company’s reputation — the water-cooler rumor mill can now play out in very public and spreadable online spaces. And finally, attackers can produce doctored videos, called deepfakes, that convincingly show public figures saying things on camera they never actually said. You’ve probably seen deepfakes of politicians like Barack Obama or Nancy Pelosi, but these videos can also be used to impersonate business leaders, then be shared online or circulated among staff.
With all of the different ways misinformation attacks can be used against businesses, it’s clear organizations need to be prepared to stay resilient in the face of any misinformation that appears. Here are 5 steps all organizations should take to build and maintain a misinformation-resilient business:
1. Monitor Social Media and Domains
Employees across various departments of your organization should be constantly keeping their ear to the ground by closely monitoring for any strange or unusual activity by and about your brand. Your marketing and social media team should be regularly keeping an eye on any chatter online about the brand and evaluating the veracity of claims being made, where they originate, and how widely the information is being shared.
At the same time, your IT department should be continuously looking for new domains that mention or closely resemble your brand. It’s common for scammers to create domains that impersonate brands in order to spread false information, phish for private information, or just seed confusion. The frequency of domain spoofing has skyrocketed this year, as bad actors take advantage of the panic and confusion surrounding the COVID-19 pandemic. When it comes to spotting deepfakes, your IT team should invest in software that can detect whether images and recordings have been altered.
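As a minimal sketch of what automated lookalike-domain monitoring might look like, a simple string-similarity check can flag newly observed domains that closely resemble your brand. The brand name, threshold, and candidate domains below are all hypothetical, and a production tool would add tactics like homoglyph and keyword checks; this only illustrates the basic idea.

```python
from difflib import SequenceMatcher

def looks_like(brand: str, domain: str, threshold: float = 0.8) -> bool:
    """Flag a domain whose first label closely resembles the brand name
    without being an exact match (an exact match is presumably yours)."""
    label = domain.lower().split(".")[0]  # "examp1ecorp" from "examp1ecorp.com"
    ratio = SequenceMatcher(None, brand.lower(), label).ratio()
    return ratio >= threshold and label != brand.lower()

# Hypothetical brand and a batch of newly seen domains
# (e.g. pulled from a new-registration or certificate-transparency feed)
brand = "examplecorp"
new_domains = [
    "examplecorp.com",      # exact match: assumed legitimate, not flagged
    "examp1ecorp.com",      # "l" swapped for "1": flagged
    "example-corp.net",     # hyphenated lookalike: flagged
    "totallyunrelated.org", # dissimilar: not flagged
]
suspicious = [d for d in new_domains if looks_like(brand, d)]
```

Running this leaves `suspicious` holding the two lookalikes, which could then be routed to the IT team for review or a takedown request.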
Across all departments, your organization needs to keep an eye out for any potential misinformation attacks. Departments also need to be in regular communication with each other and with business leadership to evaluate the scope and severity of threats as soon as they appear.
2. Know When You Are Most Vulnerable
Often, scammers behind misinformation attacks are opportunists. They look for big news stories, moments of transition, or times when investors will be keeping a close eye on an organization in order to create attacks with the biggest impact. Broadcom’s shares plummeted after a fake memorandum from the US Department of Defense claimed an acquisition the company was about to make posed a threat to national security. Organizations need to stay vigilant for moments that scammers can take advantage of, and prepare a response to any potential attack that could arise.
3. Create and Test a Response Plan
We’ve talked a lot about the importance of having a cybersecurity incident response plan, and the same rule is true for responding to misinformation. Just as with a cybersecurity attack, you shouldn’t wait to figure out a response until after an attack has happened. Instead, organizations need to form a team from various levels within the company and create a detailed plan of how to respond to a misinformation campaign before it actually happens. Teams should know what resources will be needed to respond, who internally and externally needs to be notified of the incident, and which team members will respond to which aspect of the incident.
It’s also important to not just create a plan, but to test it as well. Running periodic simulations of a disinformation attack will not only help your team practice their response, but can also show you what areas of the response aren’t working, what wasn’t considered in the initial plan, and what needs to change to make sure your organization’s response runs like clockwork when a real attack hits. Depending on the organization, it may make sense to include disinformation attacks within the cybersecurity response plan or to create a new plan and team specifically for disinformation.
4. Train Your Employees
Employees throughout the organization should also be trained to understand the risks disinformation can pose to the business, and how to effectively spot and report any instances they may come across. Employees need to learn how to question images and videos they see, just as they should be wary of links in an email. They should be trained on how to quickly respond internally to disinformation originating from other insiders like disgruntled employees, and key personnel need to be trained on how to quickly respond to disinformation in the broader digital space.
5. Act Fast
Putting all of the above steps in place will enable organizations to take swift action against disinformation campaigns. Fake news spreads fast, so organizations need to act just as quickly. Everything from putting your response plan in motion, to communicating with your social media followers and stakeholders, to contacting social media platforms to have the disinformation content removed needs to happen quickly for your organization to stay ahead of the attack.
It may make sense to think of cybersecurity and misinformation as two completely separate issues, but more and more businesses are finding out that the two are closely intertwined. Phishing attacks rely on disinformation tactics, and fake news uses technical sophistication to make its content more convincing and harder to detect. In order to stay resilient to misinformation, businesses need to incorporate these issues into larger conversations about cybersecurity across all levels and departments of the organization. Preparing now and having a response plan in place can make all the difference in maintaining your business’s reputation when false information about your brand starts making the rounds online.
Earlier this month, a study by University College London identified the top 20 security issues and crimes likely to be carried out with the use of artificial intelligence in the near future. Experts then ranked the list of future AI crimes by the potential risk associated with each crime. While some of the crimes are what you might expect to see in a movie — such as autonomous drone attacks or using driverless cars as a weapon — it turns out 4 out of the 6 crimes that are of highest concern are less glamorous, and instead focused on exploiting human vulnerabilities and biases.
Here are the top 4 human-factored AI threats:
The ability for AI to fabricate visual and audio evidence, commonly called deepfakes, is the overall most concerning threat. The study warns that the use of deepfakes will “exploit people’s implicit trust in these media.” The concern is not only related to the use of AI to impersonate public figures, but also the ability to use deepfakes to trick individuals into transferring funds or handing over access to secure systems or sensitive information.
Other high-risk, human-factored AI threats include scalable spear-phishing attacks. At the moment, phishing emails targeting specific individuals require time and energy to learn the victim’s interests and habits. However, AI can expedite this process by rapidly pulling information from social media or impersonating trusted third parties. AI can therefore make spear-phishing more likely to succeed and far easier to deploy on a mass scale.
Similarly, the study warns that AI can be used to harvest massive amounts of information about individuals, identify those most vulnerable to blackmail, and then send tailor-crafted threats to each victim. These large-scale blackmail schemes can also use deepfake technology to create fake evidence against those being blackmailed.
Lastly, the study highlights the risk of using AI to author highly convincing disinformation and fake news. Experts warn that AI will be able to learn what type of content will have the highest impact, and generate different versions of one article to be published by a variety of (fake) sources. This tactic can help disinformation spread even faster and make it seem more believable. Disinformation has already been used to manipulate political events such as elections, and experts fear the scale and believability of AI-generated fake news will only increase the impact disinformation will have in the future.
The results of the study underscore the need to develop systems to identify AI-generated images and communications. However, that might not be enough. According to the study, when it comes to spotting deepfakes, “[c]hanges in citizen behaviour might [ ] be the only effective defence.” With the majority of the highest risk crimes being human-factored threats, focusing on our own ability to understand ourselves and developing behaviors that give us the space to reflect before we react may therefore become the most important tool we have against these threats.
Last week, IBM and The Ponemon Institute released their annual Cost of a Data Breach Report. For the past 15 years, the report has highlighted recurring and emerging factors that contribute to the cost of data breaches, as well as the root causes of those breaches. One of the key findings in this year’s report is the fact that human-factored cyber attacks not only make up a large percentage of all malicious attacks, but are also incredibly costly to businesses that suffered breaches. This only confirms the importance of cyber awareness training for employees to limit the risk of a human-factored attack.
There are many different causes of a data breach, some of which are merely accidental. However, according to this year’s report, malicious attacks now make up 52% of all breaches. This wasn’t always the case. In fact, malicious attacks have seen a 24% growth rate in just six years. Malicious attacks are also the most expensive, costing businesses an average of $4.27 million. That’s nearly $1 million more than the average cost of breaches from other causes.
Given the frequency and cost of malicious attacks, it’s important to look closer at the different threats that account for the rise in malicious attacks — and the data is surprising. While expected threats such as system vulnerabilities and malicious insiders are certainly present, human-factored cyber attacks take up a large chunk of all malicious attacks. Threats ranging from phishing attacks, to business email compromise, to social engineering and cloud misconfigurations are all rooted in human rather than technical vulnerability, and account for 41% of all malicious attacks leading to data breaches. Indeed, these findings correlate with what was presented in the Verizon 2020 Data Breach Investigations Report.
Human-factored cyber attacks aren’t something you can protect yourself against strictly through technical safeguards. Instead, protecting against these vulnerabilities requires working with employees, establishing proper quality control protocols, ensuring you have the right expertise on your team, and using cyber awareness training to help build safer online habits.
As a Fortune 100 CISO once told me, “at the end of the day, every cyber incident starts with someone making a decision.”
One of the main tenets of behavioral science is something called “operant conditioning.” It’s a fancy phrase for a concept that’s actually pretty simple: a behavior followed by a reward is more likely to be repeated than a behavior followed by a punishment. While this is a pretty common sense idea, when it comes to our own goals, we don’t often think this way. Instead, we’ve grown up with a myth that true success comes only with struggle and that our biggest opponent is ourselves. Instead of focusing on our wins, we focus on our losses and think that to get anything accomplished we have to be hard on ourselves. And how well does that usually work out?
In order to create new behaviors that you can actually sustain, you need to have positive reinforcement. In other words, if you set yourself a goal that is too difficult or takes too long to achieve, your focus will be on what you’re doing wrong, leading you to give up. Instead, it’s important to build on goals that you can actually achieve and feel positive about. This isn’t to say you shouldn’t set big goals for yourself, but that, in order to get there, you first have to focus on the wins: the small, achievable goals that you can then build upon to make the changes you want for yourself.
This is a lesson that most cybersecurity training programs have yet to understand. Phish simulation programs often focus on the losses: when you click on a phish or don’t report it to your IT department. In contrast, accountability with compassion is far more effective for driving long-term behavior change, and training programs that reward positive behaviors rather than punish bad ones are more likely to help users achieve their goals.
Using positive reinforcement and focusing on the wins helps us build the skills and abilities that enable us to do great things. And, perhaps after we have accomplished the large goal we were after, we’ll realize that the actual goal was to just feel better about ourselves.
The good news: Many companies these days are using cybersecurity controls and security training for their employees. The bad news: A lot of these businesses are putting in place the bare minimum in order to meet compliance requirements. The truth is, however, that you can be compliant but not secure. Remember the big Target breach in 2013? Hackers were able to take the debit and credit card information of millions of shoppers by accessing Target’s point-of-sale systems. The irony is that, just months before the attack, Target was certified PCI compliant. In the words of then-CEO Gregg Steinhafel, “Target was certified as meeting the standard for the payment card industry in September 2013. Nonetheless, we suffered a data breach.” Simply put: Target was compliant but not secure.
Creating a Culture
If your security awareness program is a “check the box” compliance program, you can bet your employees are going through the same motions as you are. How has that improved your security posture? It hasn’t. Instead, creating a strong security program is first and foremost about creating a culture around security. And this has to start at the top, with your executive officers and your board. If business leaders set a security-focused tone, then employees will likely follow suit.
The reason a business can be compliant and not secure is that cybersecurity isn’t a one-and-done deal. Compliance is a state; cybersecurity is an ongoing process that involves the entire organization — from the boardroom to the cubicle. The Verizon Data Breach Investigations Report shows that the human element is the largest factor leading to breaches today. If that’s the case, perhaps instead of checking off the boxes and before investing in that new machine learning intrusion detection gizmo, consider focusing on human learning, engagement and the behaviors that can drive a mindful security culture.
When it comes to cybersecurity, our minds usually jump to complicated technical protections that only your IT department understands. And while these safeguards are certainly important, the truth is hackers are increasingly focusing on social engineering attacks to get into our networks. In fact, phishing attacks are now the number one cause of successful data breaches. Employees are therefore often the first line of defense against cyber attacks. That’s why more and more cybersecurity experts are emphasizing the importance of security training for employees. Business owners need to feel confident that their employees are developing online behaviors that keep the organization secure. The problem, however, is that traditional training programs aren’t always successful in achieving these behavior changes. This is, in part, because training programs too often use “gotcha!” methods when employees make a mistake, which only discourages employees instead of motivating them. Organizations should therefore focus on programs that use positive reinforcement in security training.
One popular form of cybersecurity training is phish simulation programs, where employees are sent emails designed to look like popular phishing scams. The problem, however, is that these programs almost always rely on the gotcha method. When an employee clicks on a link in a fake phishing email, typically they will see a screen telling them they got caught and are then instructed to watch an informative video. The problem is that this approach causes the employee to associate negative emotions with the training and therefore reduces the likelihood of sustained behavior change. Simply put, this type of training creates a punitive environment that discourages the individual but doesn’t create meaningful change.
Instead, one study has shown that using positive reinforcement in security training actually produces safer, longer-lasting online habits. Instead of punishing bad behavior, it’s actually more effective to focus on rewarding behavior you want to see, such as reporting phish: “By focusing on helping people feel successful, the campaign produced a positive result: a 30% reduction in overall phish susceptibility, and for individuals who had already been identified as habitual “phish clickers”, a reduction from 35% susceptibility to 0%.”
The key is to associate positive behaviors with positive feelings. It’s a small thing, but the impact could help businesses save a lot of time and money down the road.