We’ve written before about how the disruption and confusion of the COVID-19 pandemic have caused an uptick in phishing and disinformation campaigns. Yet, there is another dimension to this that is just beginning to become clear: how the isolation of remote work helps to create the conditions necessary for disinformation to take root.
In a report on the impacts of remote and hybrid work on employees, Microsoft highlights how remote work has shrunk our networks. Despite the ability to use video services like Zoom and Microsoft Teams to collaborate with others across the globe, the data reveals that remote work has actually caused us to consolidate our interactions to just those we work closely with, and to interact far less with our extended networks. The result is that employees and teams have become siloed, creating a sort of echo chamber in which new and diverse perspectives are lost. According to Dr. Nancy Baym, Senior Principal Researcher at Microsoft, when our networks shrink, “it’s harder for new ideas to get in and groupthink becomes a serious possibility.”
The gap between interactions with our close network and our distant network created by remote work doesn’t just stifle innovation, it’s also what creates the conditions necessary for disinformation to thrive. When we are only exposed to information and perspectives that are familiar to us, it becomes harder and harder to question what we are being presented with. If, for example, we are in a network of people who all believe Elvis is still alive, then without exposure to people who think otherwise, we would probably assume there isn’t any reason to question what those around us are telling us.
The point is, without actively immersing ourselves within networks with differing perspectives, it becomes difficult to exercise our critical thinking abilities and make informed decisions about the validity of the information we are seeing. Remote and hybrid work is likely going to stick around long after the pandemic is over, but that doesn’t mean there aren’t steps we can take to ensure we don’t remain siloed within our shrunken networks. In order to combat disinformation within these shrunken networks we can:
1. Play the Contrarian
When being presented with new information, one of the most important ways to ensure we don’t blindly accept something that may not be true is to play the contrarian and take up the opposite point of view. You may ultimately find that the opposite perspective doesn’t make sense, but the exercise will help you take a step back from what you are being shown and give you the chance to recognize there may be more to the story than what you are seeing.
2. Engage Others
It may seem obvious, but engaging with opinions and perspectives that are different than what we are accustomed to is essential to breaking free of the type of groupthink that disinformation thrives on. It can also be a lot harder than it sounds. The online media ecosystem isn’t designed to show you a wide range of perspectives. Instead, it’s up to us to take the time to research other points of view and actively seek out others who see things differently.
3. Do a Stress Test
Once you have a better sense of the diversity of perspectives on any given topic, you’re now in a position to use your own critical thinking skills to evaluate what you — and not those around you — think is true. Taking in all sides of an issue, you can then apply a stress test in which you try to disprove each point of view. Whichever perspective seems to hold up the best or is hardest to challenge will give you a good base to make an informed decision about what you think is most legitimate.
From our personal lives to the office, searching for opposite and conflicting perspectives will help build resilience against the effects of disinformation. It can even help us be more effective at spotting phish and social media disinformation campaigns. By looking past the tactics designed to trick us into clicking on a link or giving away information, and taking a few seconds to take a breath, examine what we are looking at, and stress test the information we are being shown, we can be a lot more confident in our ability to tell the difference between phish and phriend.
Breaches happen all the time, but every so often one of those breaches breaks through into national headlines, serving as a watershed moment about where we are and where we need to be with regards to cybersecurity. One of those watershed moments occurred last December when it was revealed that Russian state-sponsored hackers breached the software developer SolarWinds, and from there managed to access some pretty tightly-sealed networks and systems across public and private sectors. But what exactly happened? Who does it affect? What can we learn to better protect our organizations?
One of the most striking aspects of the SolarWinds hack is that it was years in the making, taking a huge amount of discipline and patience to pull off and stay undetected. Forensic evidence showed that the hackers gained access to Orion, the SolarWinds product that was compromised, back in late 2019. Yet, at that time, the hackers didn’t actually make any changes or launch an attack. Instead, they sat and waited in order to monitor, learn, and test SolarWinds’ systems to ensure they wouldn’t be caught.
Then, months later in May 2020, the hackers made their move — but not in the way most would expect. Typically, when someone wants to infect a piece of software with malware, they will modify the code behind the software. However, because security experts know to look for code modifications, these hackers decided instead to install their malware directly onto the software product itself. So, when an update for Orion was released, government agencies and companies big and small downloaded an update that contained a backdoor for the hackers.
Between May, when the malware was initially launched, and December, when the hack was discovered, the attackers were able to move throughout the networks and systems of any company using SolarWinds’ software that they wanted. And they were targeted, going after the emails of specific, high-value individuals within affected organizations. From there, the goal was to maintain access, move around infected systems, and retain access to specific individuals’ communications.
Much has been made about the level of sophistication involved in the attack — and it was sophisticated. However, at root, this is a story about 3rd party risk. We’ve written before about the importance of vendor management, and the SolarWinds hack is an extreme case in point. Because most organizations today depend in large part on 3rd party providers for everything from cloud storage, to product platforms, to network security, an attack like this doesn’t have a definitive end. Instead, the SolarWinds attack has the potential to ripple across a web of interconnected organizations across the supply chain. According to Steven Adair, a security expert who helped with the incident response for SolarWinds, the attackers “had access to numerous networks and systems that would allow them to rinse and repeat [the] SolarWinds [attack] probably on numerous different scales in numerous different ways.” It’s therefore possible — and perhaps likely — that the full effects of the hack are still to be revealed.
If that doesn’t serve as a wake-up call, we don’t know what will. And as it turns out, there are a number of effective and achievable steps organizations can take to mitigate 3rd party risk.
1. The Basics
It may not seem like much, but simply maintaining basic digital hygiene plays a big role in protecting against attacks. Strong password management, multi-factor authentication, and network segmentation should be a cybersecurity baseline for all organizations. These are simple steps that serve as an organization’s first line of defense against an attack.
2. The Rule of Least Privilege
The rule of least privilege essentially means providing the least amount of access for the least amount of time to systems and networks. This involves setting limits on what access you give to products and software as well as actively monitoring access privileges for employees, contractors, and vendors. Essentially, if something or someone doesn’t need access to a piece of your system, they shouldn’t be able to access it. If someone needs access to a part of your network for 2 days, then their privileges should expire after 2 days. This will limit the ability of malicious users to move around systems, potentially preventing them from spreading to other, more sensitive environments.
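As a minimal sketch of the time-limited access idea, the access grant below carries its own expiration date, so privileges lapse automatically once the window closes. (The `AccessGrant` class and its field names are illustrative, not taken from any particular identity-management product.)

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class AccessGrant:
    principal: str       # user, contractor, or vendor
    resource: str        # system or network segment they may touch
    expires_at: datetime

    def is_valid(self, now: datetime) -> bool:
        # Access is denied automatically once the grant expires;
        # no one has to remember to revoke it by hand.
        return now < self.expires_at


# Grant a contractor exactly two days of access to a billing database.
grant = AccessGrant(
    principal="contractor-42",
    resource="billing-db",
    expires_at=datetime(2021, 1, 1) + timedelta(days=2),
)

assert grant.is_valid(datetime(2021, 1, 2))      # within the window
assert not grant.is_valid(datetime(2021, 1, 4))  # expired: access denied
```

In practice the same check would run inside whatever gateway or IAM layer mediates access, but the principle is identical: every privilege has a built-in end date.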
3. Retain Your Logs
A lot of organizations these days maintain event logs, which essentially keep a record of all network activity. While logs might not directly prevent a breach, these records are vital to assess the potential damage and scope of an attack, allowing organizations to act swiftly and forcefully to remove the threat. However, keeping logs isn’t enough; it’s essential to also retain these logs. SolarWinds’ policy was to remove these logs after 90 days. The problem, of course, was that the attack was discovered far more than three months after the hackers breached the system, effectively making it impossible to gain any detailed insight into what the hackers were doing prior to August of 2020.
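A toy sketch makes the retention math concrete: with a 90-day purge policy, evidence of a May intrusion is already gone by the time a December discovery triggers the investigation, while a longer window would have preserved it. (The log-entry format and function name here are illustrative.)

```python
from datetime import datetime, timedelta


def purge_old_logs(entries, now, retention_days):
    """Drop log entries older than the retention window, as a
    fixed-retention policy would."""
    cutoff = now - timedelta(days=retention_days)
    return [e for e in entries if e["timestamp"] >= cutoff]


# An attacker's suspicious login in May...
breach_entry = {"timestamp": datetime(2020, 5, 15), "event": "suspicious login"}
# ...and a hack that isn't discovered until mid-December.
discovery = datetime(2020, 12, 13)

# Under a 90-day policy, the May evidence has already been purged.
assert purge_old_logs([breach_entry], discovery, 90) == []
# A 365-day policy would have preserved it for incident responders.
assert purge_old_logs([breach_entry], discovery, 365) == [breach_entry]
```

The lesson is that retention periods should be sized against realistic dwell times, which for sophisticated attackers are routinely measured in months, not weeks.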
Combining Business and Security
We’ve said it before and we’ll say it again: it’s easy to see security needs as at best a nuisance and at worst a barrier to optimal business performance, but this simply isn’t the case. As Steven Adair points out, a small company doesn’t need to hit the ground running with the best security products and a million code audits right out of the gate. However, if businesses incorporate security concerns within their business strategies, these organizations can start to ask themselves: “Where are we now, what can we do now, and what can we do along the way?” Asking those questions might just make the difference down the road when the next watershed moment strikes.
Earlier this year we wrote about the fact that cyber attacks cost businesses millions of dollars per incident. But what about the cost of cybercrime on a larger scale? This month, McAfee released a new report analyzing the cost of cybercrime globally, and the findings are staggering.
The most startling news from the report is the jump in the overall cost of cybercrime globally. Between 2018 and 2020, McAfee found a nearly 50% increase in average global cost. Now, the estimated global cost of cybercrime is $945 billion — more than 1% of the global GDP.
Just as startling, however, is that the report found a myriad of additional damages organizations face after a cyber incident beyond direct financial costs. In their report, McAfee found that 92% of organizations surveyed identified “hidden costs” that affected them beyond direct monetary losses. These hidden costs can have long-term effects on an organization’s productivity and ability to prevent future attacks.
One of the main hidden costs the report covers is the “damage to company performance” after a cyber incident. This damage, according to the report, is primarily related to a loss in productivity and lost work hours as businesses attempt to recover from an attack — usually because of system downtime and disruptions to normal operations. While these losses might be, to some extent, inevitable following an attack, McAfee’s report found that organizations routinely neglect one essential aspect of cybersecurity: communication within the organization.
We’ve talked before about the importance of creating an incident response plan, but without communication and cooperation between all areas of an organization, these plans won’t be all that effective. According to the report, IT decision makers believe some departments are never even made aware that a cyber incident happened. The breakdown in communication is especially damaging between IT and business leadership. “IT and line-of-business (LOB) decision makers,” the report says, “have different understandings of what, why, and how a company or government agency is experiencing an IT security incident.” In fact, the lack of communication extends even to whether there is a response plan at all. The report found that, in general, business leadership often believes there is a response plan in place when there isn’t one.
This lack of communication also extends to the nature and scope of an organization’s cyber risk. The report noted a significant lack of organization-wide understanding of cyber risk, which, the report states, “makes companies and agencies vulnerable to social engineering tactics. Once a user is hacked, they do not always recognize the problem in time to stop the spread of malware.”
While there will almost always be disruptions and hidden costs following a cyber incident, McAfee’s report seems to indicate many of these losses are self-inflicted. The report shows that the most common change organizations make after a cyber incident is investment in new security software. And, while technical safeguards are certainly necessary, they are far from sufficient. Instead, organizations need to begin investing in policies and procedures that ensure organization-wide communication, knowledge, and response to cyber risk and incidents.
Ever since Apple announced new privacy features included in the release of iOS 14, Facebook has waged a war against the company, arguing that these new features will adversely affect small businesses and their ability to advertise online. What makes these attacks so “laughable” is not just Facebook’s disingenuous posturing as the protector of small businesses, but that their campaign against Apple suggests privacy and business are fundamentally opposed to each other. This is just plain wrong. We’ve said it before and we’ll say it again: Privacy is good for business.
In June, Apple announced that their new mobile operating system, iOS 14, would include a feature called “AppTrackingTransparency” that requires apps to seek permission from users before tracking activity across other apps and websites. This feature is a big step towards prioritizing user control of data and the right to privacy. However, in the months following Apple’s announcement, Facebook has waged a campaign against Apple and their new privacy feature. In a blog post earlier this month, Facebook claims that “Apple’s policy will make it much harder for small businesses to reach their target audience, which will limit their growth and their ability to compete with big companies.”
And Facebook didn’t stop there. They even took out full-page ads in the New York Times, Wall Street Journal and Washington Post to make their point.
Given the fact that Facebook is currently being sued by more than 40 states for antitrust violations, there is some pretty heavy irony in the company’s stance as the protector of small business. Yet, this only scratches the surface of what Facebook gets wrong in their attacks against Apple’s privacy features.
While targeted online advertising has been heralded as a more effective way for businesses to reach new audiences and start turning a profit, the groups that benefit the most from these highly-targeted ad practices are in reality gigantic data brokers. In response to Facebook’s attacks, Apple released a letter saying that “the current data arms race primarily benefits big businesses with big data sets.”
The privacy advocacy non-profit Electronic Frontier Foundation reinforced Apple’s point and called Facebook’s claims “laughable.” Startups and small businesses used to be able to support themselves by running ads on their websites or apps. Now, however, nearly the entire online advertising ecosystem is controlled by companies like Facebook and Google, who not only distribute ads across platforms and services, but also collect, analyze, and sell the data gained through these ads. Because these companies have a stranglehold on the market, they also rake in the majority of the profits. A study by the Association of National Advertisers found that publishers only get back between 30 and 40 cents of every dollar spent on ads. The rest, the EFF says, “goes to third-party data brokers [like Facebook and Google] who keep the lights on by exploiting your information, and not to small businesses trying to work within a broken system to reach their customers.”
Because tech giants such as Facebook have overwhelming control over online advertising practices, small businesses that want to run ads have no choice but to use highly-invasive targeting methods that end up benefitting Facebook more than these small businesses. Facebook’s claim that their crusade against Apple’s new privacy features is meant to help small businesses simply doesn’t hold water. Instead, Facebook has a vested interest in maintaining the idea that privacy and business are fundamentally opposed to one another because that position suits their business model.
At the end of the day, the problem facing small business is not about privacy. The problem is the fundamental imbalance between a handful of gigantic tech companies and everyone else. The move by Apple to ensure all apps are playing by the same rules and protecting the privacy of their users is a good step towards leveling the playing field and thereby actually helping small business grow.
This also shows the potential benefits of a federal, baseline privacy regulation. Currently, U.S. privacy regulations are enacted and enforced at the state level, which, while a step in the right direction, can end up hampering business growth as organizations attempt to navigate various regulations with different levels of requirements. In fact, last year CEOs sent a letter to Congress urging the government to put in place federal privacy regulations, saying that “as the regulatory landscape becomes increasingly fragmented and more complex, U.S. innovation and global competitiveness in the digital economy are threatened” and that “innovation thrives under clearly defined and consistently applied rules.”
Lastly, we recently wrote about how consumers are willing to pay more for services that don’t collect excessive amounts of data on their users. This suggests that surveillance advertising and predatory tracking do not build customers, they build transactions. Apple’s new privacy features open up a space for businesses to use privacy-by-design principles in their advertising and services, providing a channel for those customers who place a value on their privacy.
Privacy is not bad for business, it’s only bad for business models like Facebook’s. By leveling the playing field and providing a space for new, privacy-minded business models to proliferate, we may start to see more organizations realize that privacy and business are actually quite compatible.
In recent years, much has been made of the privacy paradox: the idea that, while people say they value their privacy, their online behaviors show they are more willing to give away personal information than they’d like to think. Tech giants like Facebook and Google have faced a number of highly public privacy scandals, yet millions upon millions of users continue to use these services every day. But what happens when we think of the value of privacy not in terms of how much we want to protect our privacy, but instead in terms of how much we are willing to spend to keep our data private? Newly published research does just that, finding that, when looking at the dollar value people place on privacy, there might not be as much of a paradox as we suspected, and businesses can even learn to leverage the market value of privacy to better understand what they should (and shouldn’t) collect from consumers.
The new study, conducted by Huan Tang, an assistant professor at the London School of Economics, analyzed how much personal information users in China were willing to disclose in exchange for consumer loans. Official credit scores do not exist in China, so consumers typically have to hand over a significant amount of personal information in order for banks to assess their credit. By looking at the decisions of 320,000 users on a popular Chinese lending platform, Tang was able to compare users’ willingness to disclose certain pieces of sensitive information against the cost of borrowing.
The results? Tang found that users were willing to disclose sensitive information in exchange for an average $33 reduction in loan fees. While $33 may not seem all that significant to many in the U.S., it represents about 70% of the average daily salary in China, showing that users place a significantly high value on their privacy. What’s more, on the bank’s side this translates to a 10% decrease in revenue when they require users to disclose additional personal information.
There are a number of important implications of this study for businesses. For one, it suggests, as Tang says, “that maybe there is no ‘privacy paradox’ after all,” meaning consumers’ online behaviors do, in fact, reflect the value they place on privacy. While businesses today often make money from the data they collect, gathering everything and anything they can get their hands on, they may be losing significant revenue in lost business. According to Tang, collecting more information than necessary turns out to be inefficient. Instead, businesses can leverage the monetary value users place on their data to be more discerning when deciding what information to collect. If a piece of data is highly valued by consumers and has little direct economic benefit for a company, it may not be worth collecting. Of course, limiting data is a key tenet of Privacy by Design principles, which organizations should be applying to their practices in order to improve their privacy posture vis-à-vis GDPR and other privacy regulations. Limiting data also improves an organization’s cybersecurity posture because it reduces its exposure.
While it may seem counterintuitive given today’s standard practice of collecting as much data as possible, this study shows that limiting the data that is collected can be, according to Tang, a “win-win” for businesses and consumers alike.
Every so often something comes along and disrupts the normal order of things, and out of that disruption something new emerges. It’s certainly not a stretch to say that 2020 has brought plenty of disruptions with it, and according to a recent report by Gartner, businesses are starting to “reset” how they operate and implement new strategies reliant on emerging, more sophisticated technologies. In the report, Gartner lists a number of predictions for what the future of business will look like. Perhaps the most startling prediction the report makes is the increase in workplace surveillance: “By 2025, 75% of conversations at work will be recorded and analyzed, enabling the discovery of added organizational value or risk.” Whether this prediction will turn out to be true is up for debate; however, the tone of the report seems to imply there isn’t much we can do about it. The problem, of course, is that these changes don’t appear out of thin air. People create the change. This means, if Gartner’s prediction turns out to be true, we aren’t completely helpless and could even play a role in building new technologies based on the values and ethics people share. Just like there is a movement in cybersecurity to create technologies that are based on privacy by design, as we begin moving towards a new future, we also need to focus on creating technology based on ethics by design that promotes the well-being and rights of individuals.
While the idea of having every conversation and interaction you have at work recorded and analyzed probably doesn’t sound too appealing to employees, Gartner’s report highlights the possible benefits this will have for businesses. As Magnus Revang, research vice president at Gartner, explained to Tech Republic, “By analyzing these communications, organizations could identify sources of innovation and coaching throughout a company.” This may certainly be true. In fact, organizations could even use this data to help improve the workplace for employees.
Of course, if we’ve learned anything in the past decade, it’s that technology used for good can also be used for bad. And Revang recognizes the risk involved with this shift. “I definitely think there [are] companies that are going to use technology like this and misuse it, and step over the line of what you would call ethical or moral.” When used correctly, however, Revang believes the benefits of this technology will outweigh any possible risks.
The problem with this argument, however, is that it assumes the problem is not with the technology itself, but the people who use it. According to Tech Republic, Revang believes “technology is inherently neutral, however the way an organization chooses to deploy and use a technology is another consideration.” What this way of thinking doesn’t consider, however, is that technology is built by people — people who are certainly far from neutral. As Joan Donovan, a social science researcher at Harvard University, recently put it, the technology we build encodes “a vision of society and the economy.”
Humans are flawed, and technology is stained with our flaws before it is even operationalized. So, when looking towards the future of technology in business, without designing these new innovations with ethics in mind, our underlying biases and flaws will play a big role in the consequences this technology will have for our everyday lives. This has huge implications in every facet of society, and unfortunately, our ethical oversight structures are too weak to mitigate these threats.
There’s talk about privacy by design principles, and there are AI-bias frameworks being developed. But, in order to create technologies that support our better angels and not our worst impulses, we need experts across all fields and sectors to work together to understand and develop ethics by design principles that can help build technologies that are not only useful, but that reflect the values and ideals of a more just and equitable society.
Yesterday, I received an email from a business acquaintance that included an invoice. I knew this person and his business but did not recall him ever doing anything for me that would necessitate a payment. I called him about the email, and he said that his account had indeed been hacked and those emails were not from him. What occurred was an example of business email compromise (BEC) using stolen credentials.
Typically, BEC is a form of cyber attack where attackers create fake emails that impersonate executives in order to convince employees to send money to a bank account controlled by the bad guys. According to the FBI, BEC is the costliest form of cyber attack, scamming businesses out of $1.7 billion in 2019 alone. One reason these attacks are becoming so successful is that attackers are upping their game: instead of creating fake email addresses that look like a CEO’s or a vendor’s, attackers are now learning to steal login info to make their scams that much more convincing.
By compromising credentials, BEC attackers have opened up multiple new avenues to carry out their attacks and increase the chance of success. Among all the ways compromised credentials can be used for BEC attacks, here are 3 that every business should know about.
Vendor Email Compromise
One way BEC attackers can use compromised credentials has been called vendor email compromise. The name, however, is a little misleading, because vendors aren’t actually the target of the attack. Instead, they are the means to carry out an attack on a business. Essentially, BEC attackers will compromise the email credentials of an employee in the billing department of a vendor, then send invoices from that email to businesses requesting they make payment to a bank account controlled by the attackers.
Internal Email Compromise
Another way attackers can use compromised credentials to carry out BEC scams is to use the credentials of someone in the finance or accounting department of an organization to make payment requests to other employees and suppliers. By using the actual email of someone within the company, payment requests look far more legitimate, increasing the chance that the scam will succeed.
What’s more, attackers can use compromised credentials of someone in the billing department to even target customers for payment. Of course, if the customers make a payment, it goes to the attackers and not to the company they think they are paying. This is a new method of BEC, but one that is gaining steam. In a press release earlier this year, the FBI warned of the use of compromised credentials in BEC to target customers.
Advanced Intel Gathering
Another method to use compromised credentials for BEC doesn’t even involve using the compromised account to request payments. Instead, attackers will gain access to the email account of an employee in the finance department and simply gather information. With enough time, attackers can study who the business releases funds to, how often, and what the payment requests look like. With all of this information under their belt, attackers will then create a near-perfect impersonation of the entity requesting payment and send the request exactly when the business is expecting it.
Attackers have even figured out a way to retain access to an employee’s emails after they’ve been locked out of the account. Once they’ve gained access to an employee’s inbox, attackers will often set the account to auto-forward any emails the employee receives to an account controlled by the attacker. That way, even if the employee changes their password, the attacker can still view every message the employee receives.
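One illustrative defense is to periodically audit mailbox forwarding rules for destinations outside the organization’s own domains. The sketch below works on a simplified, hypothetical export of such rules; real platforms like Microsoft 365 and Google Workspace expose forwarding configuration through their admin tooling, and the field names here (`mailbox`, `forward_to`) are assumptions for the example.

```python
def find_suspicious_forwards(rules, allowed_domains):
    """Flag auto-forward rules that send mail outside the organization.

    `rules` is a list of dicts with 'mailbox' and 'forward_to' keys
    (a simplified stand-in for a real admin-API export).
    """
    flagged = []
    for rule in rules:
        # Extract the destination domain and compare it to the allow-list.
        domain = rule["forward_to"].split("@")[-1].lower()
        if domain not in allowed_domains:
            flagged.append(rule)
    return flagged


rules = [
    # Legitimate internal forward.
    {"mailbox": "cfo@example.com", "forward_to": "assistant@example.com"},
    # Forward to an external address an attacker controls.
    {"mailbox": "cfo@example.com", "forward_to": "dropbox123@attacker-mail.net"},
]

assert find_suspicious_forwards(rules, {"example.com"}) == [rules[1]]
```

Even a crude check like this, run on a schedule, surfaces the silent persistence mechanism described above before it can feed an attacker months of correspondence.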
What you can do
All three of these emerging attack methods should make businesses realize that BEC is a real and dangerous threat. It can be far harder to detect a BEC attack when the attackers are sending emails from a real address or using insider information from compromised credentials to expertly impersonate a vendor. Attackers can gain access to these credentials in a number of ways. First, through initial phishing attacks designed to capture employee credentials. Earlier this year, for example, attackers launched a spear phishing campaign to gather the credentials of finance executives’ Microsoft 365 accounts in order to then carry out a BEC attack. Attackers can also pay for credentials on the dark web that were stolen in past data breaches. Even though these breaches often involve credentials of employees’ personal accounts, if an employee uses the same login info for every account, then attackers will have easy access to carry out their next BEC scam.
While the use of compromised credentials can make BEC harder to detect, there are a number of things organizations can do to protect themselves. First, businesses should ensure all employees—and vendors!—are properly trained in spotting and identifying phishing attacks. Second, organizations should require proper password management for all users. Employees should use different credentials for every account, and multi-factor authentication should be enabled for vulnerable accounts such as email. Lastly, organizations should disable or limit auto-forwarding to prevent attackers from continuing to capture emails received by a targeted employee.
Businesses should also ensure employees in the finance department receive additional BEC training. A report earlier this year found an 87% increase in BEC attacks targeting employees in finance departments. Ensuring employees in the finance department know, for example, to confirm any changes to a vendor’s bank information before releasing funds, is key to protecting your organization from falling prey to the increasingly sophisticated BEC landscape.
Last week, on the Tuesday before Thanksgiving, state auditors released a report detailing “significant risks” within the Baltimore County school district’s computer network. The next day, the school district was hit with a ransomware attack that shut schools down until Wednesday of this week. Because of the increase in COVID-19 cases, the district had just shifted online. However, the ransomware attack put a stop to remote learning and gave over 115,000 students an extra week off school.
The state auditor’s report, released just the day before the attack, details the findings of an investigation into the security of the district’s computer systems conducted between May 2019 and February 2020. One of the major findings was that 26 publicly-accessible servers were located within the district’s internal network, rather than segregated in external networks. This increases the risk of a user accessing the district’s internal systems via the public servers. In addition, the report found that the district did not have adequate protections in place to secure personally identifiable information, that there was no detection system in place to catch unwanted traffic, and that students even had “unnecessary network-level access to administrative servers.”
The district has said it is too early to tell whether the attack was related to the vulnerabilities found in the auditor’s report. However, it is certainly possible that the lack of network segmentation made it easier for the ransomware to spread across systems and devices. The district has also not said whether any personally identifiable information was compromised in the attack.
Despite the district’s tight lips surrounding the specifics of the attack, it did ask all students and staff to perform a “confidence check” on school-issued devices, which potentially sheds light on some of the details. Specifically, the district is asking students and staff to look for .ryk file extensions on their devices. This file extension likely points to an increasingly common form of ransomware called Ryuk, which encrypts data within the network. This may be a relief to school officials, given the recent trend in ransomware where attackers actually steal and leak sensitive data rather than just encrypt it within the network. However, Ryuk is also infamous for its ability to quickly spread across devices connected to the network, including back-ups. This makes the state auditor’s findings potentially highly relevant to the scope and impact this attack has caused so far.
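A “confidence check” like the one the district requested can be approximated with a simple sweep of a directory tree for the telltale extension. This is a hedged sketch only: the extension list and starting directory are illustrative assumptions, and a real triage would also look for ransom notes and other indicators of compromise.

```python
import os
from pathlib import Path


def find_suspicious_files(root: str, extensions=(".ryk",)) -> list[Path]:
    """Walk a directory tree and collect files whose names end in one of
    the given extensions (case-insensitive), e.g. Ryuk's .ryk suffix."""
    wanted = tuple(ext.lower() for ext in extensions)
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(wanted):
                hits.append(Path(dirpath) / name)
    return hits
```

Any hits would mean the device should be taken off the network and reported to IT immediately, not cleaned up by the user.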
The Baltimore School District’s ransomware attack is unfortunately not entirely surprising. In the past few years, attackers have started targeting public agencies and schools. Because public entities often don’t have the budget or personnel for sophisticated cybersecurity defense and their services are essential for many people, attackers see these as juicy targets for ransomware attacks.
This doesn’t mean, however, that public agencies need to be sitting ducks. If the district had had an intrusion detection system in place, for instance, it’s possible it could have caught the attack before it even started. The fact that students had access to certain administrative servers is also a big problem, and one that could be easily fixed with simple access control measures. Lastly, while you can’t always prevent these attacks from happening, segregating networks and devices can go a long way towards limiting the impact of ransomware. This will not only help prevent the spread of an attack throughout the network but, if back-ups are routinely tested and stored offline, could allow organizations to easily restore their systems to a pre-attack state without paying a ransom. The attack against the Baltimore County School District is a stark example of the importance of creating not just a cyber-secure, but also a cyber-resilient online environment.
By now, most everyone has heard about the threat of misinformation within our political system. At this point, fake news is old news. However, this doesn’t mean the threat is any less dangerous. In fact, over the last few years misinformation has spread beyond the political world and into the private sector. From a fake news story claiming that Coca-Cola was recalling Dasani water because of a deadly parasite in the bottles, to false reports that an Xbox killed a teenager, more and more businesses are facing online misinformation about their brands, damaging their reputations and financial stability. While businesses may not think to take misinformation attacks into account when evaluating the cyber threat landscape, it’s increasingly clear that misinformation should be a primary concern for organizations. Just as businesses are beginning to understand the importance of being cyber-resilient, organizations also need policies in place to stay misinformation-resilient. This means organizations need to start taking both a proactive and a reactive stance towards future misinformation attacks.
Perhaps the method of disinformation we are all most familiar with is the use of social media to quickly spread false or sensationalized information about a person or brand. However, disinformation can take a number of different guises. Fraudulent domains, for example, can be used to impersonate companies in order to misrepresent brands. Attackers also create copycat sites that look like your website but actually serve malware that visitors download when they visit. Insiders can weaponize digital tools to settle scores or hurt the company’s reputation: the water-cooler rumor mill can now play out in very public and shareable online spaces. And finally, attackers can create doctored videos called deepfakes, which convincingly show public figures saying things on camera they never actually said. You’ve probably seen deepfakes of politicians like Barack Obama or Nancy Pelosi, but these videos can also be used to impersonate business leadership in clips shared online or circulated among staff.
With all of the different ways misinformation attacks can be used against businesses, it’s clear organizations need to be prepared to stay resilient in the face of any misinformation that appears. Here are 5 steps all organizations should take to build and maintain a misinformation-resilient business:
1. Monitor Social Media and Domains
Employees across various departments of your organization should constantly keep their ear to the ground by closely monitoring for any strange or unusual activity by and about your brand. Your marketing and social media team should regularly keep an eye on any online chatter about the brand and evaluate the veracity of claims being made, where they originate, and how widely the information is being shared.
At the same time, your IT department should be continuously looking for new domains that mention or closely resemble your brand. It’s common for scammers to create domains that impersonate brands in order to spread false information, phish for private information, or just seed confusion. The frequency of domain spoofing has sky-rocketed this year, as bad actors take advantage of the panic and confusion surrounding the COVID-19 pandemic. When it comes to spotting deepfakes, your IT team should invest in software that can detect whether images and recordings have been altered.
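A first pass at spotting lookalike domains can be as simple as a string-similarity sweep over newly observed domains. The sketch below uses Python’s standard-library `difflib`; the 0.8 threshold is an illustrative assumption, and production monitoring would add homoglyph and keyword checks on top of this.

```python
from difflib import SequenceMatcher


def lookalike_score(candidate: str, brand: str) -> float:
    """Similarity ratio (0.0-1.0) between a candidate domain and the
    brand's real domain; high-but-not-exact scores suggest typosquats."""
    return SequenceMatcher(None, candidate.lower(), brand.lower()).ratio()


def flag_lookalikes(candidates, brand, threshold=0.8):
    """Return candidate domains that closely resemble the brand domain
    without being identical to it."""
    return [
        d for d in candidates
        if d.lower() != brand.lower() and lookalike_score(d, brand) >= threshold
    ]
```

Flagged domains can then be fed into a takedown or block-list workflow before scammers put them to use.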
Across all departments, your organization needs to keep an eye out for any potential misinformation attacks. Departments also need to be in regular communication with each other and with business leadership to evaluate the scope and severity of threats as soon as they appear.
2. Know When You Are Most Vulnerable
Often, the scammers behind misinformation attacks are opportunists. They look for big news stories, moments of transition, or times when investors will be keeping a close eye on an organization in order to create attacks with the biggest impact. Broadcom’s shares plummeted after a fake memorandum from the US Department of Defense claimed an acquisition the company was about to make posed a threat to national security. Organizations need to stay vigilant for moments that scammers can take advantage of, and prepare a response to any potential attack that could arise.
3. Create and Test a Response Plan
We’ve talked a lot about the importance of having a cybersecurity incident response plan, and the same rule holds for responding to misinformation. Just as with a cybersecurity attack, you shouldn’t wait to figure out a response until after an attack has happened. Instead, organizations need to form a team from various levels within the company and create a detailed plan for how to respond to a misinformation campaign before it actually happens. Teams should know what resources will be needed to respond, who internally and externally needs to be notified of the incident, and which team members will handle which aspect of the incident.
It’s also important to not just create a plan, but to test it as well. Running periodic simulations of a disinformation attack will not only help your team practice their response, but can also show you what areas of the response aren’t working, what wasn’t considered in the initial plan, and what needs to change to make sure your organization’s response runs like clockwork when a real attack hits. Depending on the organization, it may make sense to fold disinformation attacks into the cybersecurity response plan or to create a new plan and team specifically for disinformation.
4. Train Your Employees
Employees throughout the organization should also be trained to understand the risks disinformation can pose to the business, and how to effectively spot and report any instances they may come across. Employees need to learn how to question images and videos they see, just as they should be wary of links in an email. They should be trained on how to quickly respond internally to disinformation originating from insiders like disgruntled employees, and key personnel need to be trained on how to quickly respond to disinformation in the broader digital space.
5. Act Fast
Putting all of the above steps in place will enable organizations to take swift action against disinformation campaigns. Fake news spreads fast, so organizations need to act just as quickly. Putting your response plan in motion, communicating with your social media followers and stakeholders, and contacting social media platforms to have the disinformation removed all need to happen quickly for your organization to stay ahead of the attack.
It may make sense to think of cybersecurity and misinformation as two completely separate issues, but more and more businesses are finding out that the two are closely intertwined. Phishing attacks rely on disinformation tactics, and fake news uses technical sophistication to make its content more convincing and harder to detect. In order to stay resilient to misinformation, businesses need to incorporate these issues into larger conversations about cybersecurity across all levels and departments of the organization. Preparing now and having a response plan in place can make all the difference in maintaining your business’s reputation when false information about your brand starts making the rounds online.