In 1989 the U.S. Postal Service issued new stamps that featured four different kinds of dinosaurs. While the stamps look innocent enough, their release was the source of controversy among paleontologists, and even serves as an example of how misinformation works by making something false appear to be true.
The controversy revolves around the inclusion of the brontosaurus, which, according to scientists at the time, never existed. In 1874, paleontologist O.C. Marsh discovered the bones of what he thought was a new species of dinosaur. He called it the brontosaurus. However, as more scientists discovered similar fossils, they realized that what Marsh had found was in fact a species previously identified as an apatosaurus, which, ironically, is Greek for “deceptive lizard.” Paleontologists were therefore rightly upset to see the brontosaurus included on a stamp alongside real dinosaurs.
Over 30 years later, however, these stamps may have something to teach us about how disinformation works today. They show that disinformation is not simply about falsehoods — it’s about how those falsehoods are presented so as to seem true.
The stamps help illustrate this in three ways:
1) The Appearance of Authority
One of the ways something can appear to be true is when the information comes from a figure of authority. Because the stamps were officially issued by the U.S. government, the information on them carries the appearance of truth. Of course, no one thinks the USPS is an authority on dinosaurs, and yet the very position of authority the postal service occupies seems to serve as a guarantee that what it presents is true. The appearance of authority, however misplaced, is often enough for us to believe something to be true.
Scammers use this tactic all the time. It’s why you’ve probably gotten so many robocalls claiming to be from the IRS. Phishing emails use it too, spoofing the ‘from’ field and borrowing the logos of businesses and government agencies. Too often we assume that, just because information appears to come from an authority, it must be true.
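As a toy illustration of the spoofed ‘from’ field tactic, a mail filter can compare the domain shown in the visible From header against the domain in the Return-Path header the message actually came through. This is a minimal sketch using Python’s standard email module; the addresses are invented, and real spoofing detection (SPF, DKIM, DMARC) is far more involved.

```python
from email import message_from_string
from email.utils import parseaddr

def sender_domains(raw_message: str):
    """Extract the domains from the From and Return-Path headers."""
    msg = message_from_string(raw_message)
    from_addr = parseaddr(msg.get("From", ""))[1]
    return_addr = parseaddr(msg.get("Return-Path", ""))[1]
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    return_domain = return_addr.rsplit("@", 1)[-1].lower()
    return from_domain, return_domain

def looks_spoofed(raw_message: str) -> bool:
    """Flag a message whose visible From domain doesn't match Return-Path."""
    from_domain, return_domain = sender_domains(raw_message)
    return bool(from_domain and return_domain and from_domain != return_domain)

# Hypothetical phish: the display domain and the bounce domain disagree.
suspicious = (
    "From: IRS Refunds <refunds@irs.gov>\n"
    "Return-Path: <bounce@cheap-hosting.example>\n"
    "Subject: Your refund is waiting\n"
    "\n"
    "Click here to claim your refund..."
)
print(looks_spoofed(suspicious))  # True
```

A mismatch is not proof of fraud (mailing-list services legitimately diverge here), which is why this check is a signal to investigate rather than a verdict.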
2) Truths and a Lie
Another way something false can appear true is by placing it among things that are actually true. The fact that the other stamps in the collection — the tyrannosaurus, the stegosaurus, and the pteranodon — are real gives the brontosaurus the appearance of truth. By placing one piece of false information alongside recognizably true information, that piece of false information starts to look more and more like a truth.
Fake news on social media uses this tactic all the time. Phishing attacks also take advantage of this by replicating certain aspects of legitimate emails. This might include mentioning information in the news, such as COVID-19, or even including things like an unsubscribe link at the end of the email. This tactic works by using legitimate information and elements in an email to cover up what is fake.
3) Anchoring
The U.S. Postal Service did not invent the brontosaurus: in fact, the American Museum of Natural History named a skeleton brontosaurus in 1905. Once a claim is anchored as truth, it becomes very hard to dislodge. This was the very reasoning the Postal Service used when challenged: “Although now recognized by the scientific community as Apatosaurus, the name Brontosaurus was used for the stamp because it is more familiar to the general population.” Anchoring is a key aspect of disinformation, especially with regard to its persistence.
Overall, what the brontosaurus stamp shows us is that our ability to discern the true from the false largely depends on how information is presented to us. Scammers and phishers have understood this for a long time. The first step in critically engaging with information online is therefore to recognize that just because something appears true does not, in fact, make it true. Given the continued rise of disinformation, this is a lesson that is more important now than ever. In fact, it is unlikely disinformation will ever become extinct.
When you want to form a new habit or learn something new, you may think the best way to start is to dedicate as much time and energy as you can to it. If you want to learn a new language, for example, you may think that spending a couple of hours every day doing vocab drills will help you learn faster. Well, according to behavioral scientist BJ Fogg, you might be taking the wrong approach. Instead, it’s better to focus on what Fogg calls tiny habits: small, easy-to-accomplish actions that keep you engaged without overwhelming you.
Sure, if you study Spanish for three hours a day you may learn at a fast rate. The problem, however, is that too often we try to do too much too soon. When we set unrealistic goals or expect too much of ourselves, new habits become hard to maintain. Instead, if you spend only five minutes a day, chances are you will be able to sustain and grow the habit over a longer period of time, and you’ll have a better chance of retaining what you’ve learned.
The Keys To Success
According to Fogg, creating lasting behavior change requires three elements to come together at the same moment:
- Motivation: You have to want to make a change.
- Ability: The new habit has to be achievable.
- Prompt: There needs to be some notification or reminder that tells you it’s time to do the behavior.
Creating and sustaining new habits requires all three of these elements — if any one is missing, the new behavior won’t occur. For example, if you want to go for a 5-mile run, you’re going to need a lot of motivation to do it. But if you set smaller, easy-to-achieve goals — like running for 5 minutes — you only need a little motivation to do the new behavior.
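Fogg’s three-element model can be sketched as a toy function: the behavior fires only when a prompt arrives and motivation and ability together clear the action threshold. The numeric scales and threshold below are illustrative assumptions for the sketch, not part of Fogg’s actual model.

```python
def behavior_occurs(motivation: float, ability: float, prompted: bool,
                    threshold: float = 1.0) -> bool:
    """Toy sketch of Fogg's model: a behavior happens only when a prompt
    arrives while motivation and ability together clear the action line.
    The 0-to-N scales and the threshold here are illustrative, not Fogg's."""
    if not prompted:
        return False  # no prompt, no behavior, regardless of motivation
    return motivation * ability >= threshold

# A 5-mile run is hard (low ability), so even high motivation may not suffice.
print(behavior_occurs(motivation=0.9, ability=0.3, prompted=True))   # False
# A 5-minute run is easy (high ability), so a little motivation is enough.
print(behavior_occurs(motivation=0.4, ability=3.0, prompted=True))   # True
```

The sketch also makes the role of the prompt obvious: without one, the function returns False no matter how motivated or able you are.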
The other key factor is to help yourself feel successful. Spending 2 minutes reviewing Spanish tenses may not feel like a big accomplishment, but by celebrating every little win you will reinforce your motivation to continue.
The Future of Cyber Awareness
Tiny habits can not only help people learn a new language or start flossing; they can also play an important role in forming safer, more conscious online practices. Our cyber awareness training program, The PhishMarket™, is designed with these exact principles in mind. The program combines two elements, both based on Fogg’s model:
Phish Simulations: Phish simulations expose people to different forms of phishing attacks and motivate them to be more alert when looking at their inbox. While most programs scold or punish users who fall for a phish, The PhishMarket™ instead uses positive reinforcement to encourage users to keep going.
Micro-Lessons: Unlike most training programs that just send you informative videos and infographics, The PhishMarket™ exclusively uses short, interactive lessons that engage users and encourage them to participate and discuss what they’ve learned. By keeping the lessons short, users only need to dedicate a few minutes a day and aren’t inundated with a barrage of information all at once.
Creating smart and safe online habits is vital to our world today. But traditional training techniques are too often boring, inconsistent, and end up feeling like a chore. Instead, we believe the best way to help people make meaningful changes in their online behavior is to focus on the small things. By leveraging Fogg’s tiny habits model, The PhishMarket™ has successfully helped users feel more confident in their ability to spot phish and disinformation.
The current crisis has forced all of us to make changes that we otherwise wouldn’t have made. The upside, however, is that some of these changes may end up benefiting us well beyond the pandemic. One area that desperately needs this change is our view of cyber awareness — whether in remote environments or at the office. One report found that 91% of IT leaders simply trusted their employees to maintain safe security practices while working at home. This trust, it turns out, is misplaced, with 48% of employees saying that they are less likely to follow security practices at home. The bottom line is, if organizations want their employees to take cyber awareness in remote environments more seriously, they need to find a new way to help their employees create lasting behavior change.
Working from home creates unique challenges for employees. They’re distracted, they’re doing their work on their personal devices, and they don’t have co-workers and managers there to motivate them. To build better cyber awareness while working from home, organizations should therefore focus on creating “micro-moments.” These micro-moments are small opportunities built around key elements:
- Frequent and consistent
- Involve positive reinforcement
By combining these elements, micro-moments sensitize employees to thinking about cyber awareness in their daily work, motivate them to keep learning, and keep them from seeing cyber awareness as a burden or something that takes away their ability to get work done.
We know this works because it is the very foundation of Designed Privacy’s cyber awareness program, The PhishMarket™. The program combines phish simulations, daily micro-lessons, and detailed reporting to create behavior change that employees want to maintain. A study of The PhishMarket™ conducted by Stanford’s Peace Innovation Lab found that our program resulted in a 30% reduction in overall phish susceptibility in just four weeks, and 70% of participants said they would do the program again.
By incorporating a new type of cyber awareness training that focuses on creating micro-moments, organizations can help their employees create lasting behavior change, and the trust IT leaders place in their employees won’t be as misplaced as before.
A hacker got into your system, but you spot the problem before the hacker has a chance to carry out an attack. Best case scenario, right? Well, it all depends on what you do next. The city government of Florence, Alabama found itself in this exact situation, but its response is now costing it nearly $300,000. Here’s what happened:
In late May, cybersecurity reporter Brian Krebs received a tip that hackers known for ransomware attacks had gained access to Florence’s IT system. Krebs made numerous attempts to contact city officials before finally receiving a voicemail thanking him for the tip and telling him that the city had taken care of the issue. However, on June 5th the city announced that a ransomware attack had shut down its email system. The city now plans to pay the hackers the nearly $300,000 ransom to restore its systems.
So, what went wrong? According to city officials, when the attack hit, the IT department was in the middle of securing approval for funds to investigate and stop the attack. Local governments are often slow to act, to be sure, but officials knew about the hacker 10 days before the attack and still weren’t prepared. The bottom line is, given the rise in ransomware attacks on public institutions, Florence officials needed to have a detailed plan in place before an attack took place. Instead, they scrambled. And, to add insult to injury, the hackers gained access to the city’s systems by stealing the Florence IT manager’s credentials through a phishing attack.
How to Beat the Hackers
So, what should you do if you know you’ve been hacked but haven’t yet been attacked? Here are just a few steps you can take:
1. Have a Plan in Place
One of the main reasons Florence was slow to act is that the city waited until after the hack to figure out a game plan. Instead, the city needed to have a detailed incident response plan in place. This involves first identifying what types of attacks you are most vulnerable to. Then, you need to create a detailed step-by-step response for each type of attack, and create a team of employees responsible for carrying out each of the steps. You also need to ensure you have contingency funds readily available to carry out the plan quickly. Finally, it is important to simulate each type of attack so that the team can practice carrying out their response. Overall, the goal of an incident response plan is to deal with potential attacks as quickly and efficiently as possible.
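One hypothetical way to make such a plan concrete is to keep it as structured data, so each attack type maps to named steps, owners, and a contingency budget that can be reviewed and drilled. Everything below (names, steps, dollar figures) is illustrative, not Florence’s actual plan or any standard format.

```python
# Sketch of an incident response plan kept as data rather than in someone's
# head. All entries are invented for illustration.
RESPONSE_PLAN = {
    "ransomware": {
        "steps": [
            "Isolate infected systems from the network",
            "Notify the incident response team lead",
            "Verify offline backups before any restore",
            "Engage legal counsel and law enforcement",
        ],
        "owners": {"containment": "IT operations", "communications": "City PIO"},
        "contingency_fund_usd": 50_000,  # pre-approved, so no mid-crisis budget fight
        "last_drilled": "2020-05-01",
    },
}

def next_step(plan, attack_type, completed):
    """Return the next uncompleted step for an attack type, or None when done
    or when the attack type has no playbook."""
    steps = plan.get(attack_type, {}).get("steps", [])
    return steps[completed] if completed < len(steps) else None

print(next_step(RESPONSE_PLAN, "ransomware", 0))
# Isolate infected systems from the network
```

Keeping the plan as data also makes drills easy to script: walk the steps in order and time how long each owner takes to complete theirs.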
2. Shut Down and Isolate Infected Systems
In order to keep the hackers from accessing other systems, it is important to shut down and isolate infected systems and any devices connected to them. Remove the systems from your network. Disconnect their wireless and Bluetooth capabilities. Any devices previously connected to the infected systems should be shut down and removed from the network as well. Along with keeping the hack from spreading, this also limits the hackers’ ability to encrypt or damage the infected systems.
3. Secure Your Backups
Having updated and secure backups is especially important in the case of ransomware attacks. If a hacker encrypts your data, having a recent backup of that data could save you from having to pay the ransom. There are two important caveats, however. First, it’s important that you regularly test your backups to ensure your data isn’t corrupted in the backup or restoration process. Second, keeping copies of your backups secure and offline is essential. Otherwise, hackers may gain access to your backups and encrypt or remove them from your systems.
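One simple way to act on the “regularly test your backups” advice is to compare checksums of the original data and the backup copy after a test restore. A minimal sketch using Python’s standard hashlib, with temporary files standing in for real data:

```python
import hashlib
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large backups fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_is_intact(original: Path, backup: Path) -> bool:
    """A restored backup should hash identically to the original data."""
    return checksum(original) == checksum(backup)

# Demo: invented files standing in for live data, a good backup, and a
# backup that was silently corrupted in transit.
with tempfile.TemporaryDirectory() as tmp:
    data = Path(tmp) / "data.db"
    good = Path(tmp) / "backup.db"
    bad = Path(tmp) / "corrupted.db"
    data.write_bytes(b"payroll records")
    good.write_bytes(b"payroll records")
    bad.write_bytes(b"payroll rec\x00rds")
    print(backup_is_intact(data, good))  # True
    print(backup_is_intact(data, bad))   # False
```

In practice the stored checksums would be kept alongside the offline backups, so a restore can be verified even if the original systems are already encrypted.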
4. When in Doubt, Rebuild
The hard truth is, the most reliable way to shut down a hack before an attack is to completely remove the infected systems and rebuild them from scratch. Of course, the time, resources, and personnel required to do this makes it a difficult pill to swallow for many organizations. However, it is the only way to guarantee that a hack is removed from your systems.
The Bottom Line
Spotting a hack before the attack can give you a leg up on the hackers. But, as the ransomware attack on Florence, Alabama makes clear, knowing that someone accessed your systems is not enough. You need to have a game plan ready to go and carry it out as fast as possible. Using your time and resources to prepare for an attack now will give you peace of mind, and potentially reduce the cost of a hack later.
This week, Canada announced that, along with Microsoft and the Alliance for Securing Democracy, they will be leading an initiative to counter election interference as outlined in the Paris Call for Trust and Security in Cyberspace. The Paris Call is an international agreement outlining steps to establish universal norms for cybersecurity and privacy. The agreement has now been signed by over 550 entities, including 95 countries and hundreds of nonprofits, universities, and corporations. Nations such as Russia, China, and Israel did not sign the agreement, but one country’s absence is particularly notable—the U.S.
While the Paris Call is largely symbolic, with no legally binding standards, it does outline nine principles that signatories commit to uphold and promote. Among these principles are the protection of individuals and infrastructure from cyber attacks, the defense of intellectual property, and the defense of elections from interference.
Non-Government Entities are Governing Cybersecurity Norms
Despite the U.S.’s absence from the agreement, many of the United States’ largest tech companies signed it, including IBM, Facebook, and Google. In addition, Microsoft says it worked especially closely with the French government to write the Paris Call. The inclusion of private organizations in the agreement is a sign of the increasing importance of non-governmental entities in shaping and enforcing cybersecurity practices. The fact that Microsoft—and not the U.S.—is taking the lead on the agreement’s principle to counter election interference is a particularly strong example of how private companies are shaping the relationship between technology and democracy.
A Flawed Step, But a Step Nonetheless
Some organizations that signed the agreement, however, remain wary of private influence and how it might affect some of the principles of the Paris Call. Access Now, a non-profit dedicated to a free and open internet, raised concerns about how the agreement might give too much authority to private companies. One of the agreement’s principles, for example, encourages stakeholders to cooperate to address cyber criminality, which Access Now worries could be interpreted as a relaxing of judicial standards that would allow for an “informal exchange of data” between companies and government agencies. The non-profit also worries the principle concerning the protection of intellectual property could lead to a “heavy-handed approach,” by both private and public entities, “that could limit the flow of information online and risk freedom of expression and the right to privacy.”
On the opposite side, others have argued that the principles are more fluff than substance: lofty aspirations without specificity or accountability.
That being said, the Paris Call is at the very least an acknowledgment that, much like climate change, our global reliance on technology requires policy coordination on a global scale, involving not only nations but also the technology companies that are helping define our future. After all, it’s hard to imagine solving any global issue without coordinated technology supporting us. The Paris Call may not be the right answer, but we should probably pick up the phone and be part of the conversation.