Maybe the biggest misconception about forming new habits is that success hinges on the motivation to change. We often imagine that as long as we want to make a change in our lives, we have the power to do it. In fact, motivation is the least reliable element in making behavior changes. The hard truth is that simply wanting to make a change is far from enough.
The reason? Motivation isn’t static; it comes and goes in waves. It’s therefore tough to keep our motivation high enough to produce lasting behavior change. Take the response to the COVID-19 pandemic, for example. When it first appeared in the U.S., we were highly motivated to socially distance. As time went on, however, more and more people started to take risks and go out more. That wasn’t because the dangers had lessened, but because our motivation to stay inside started to wane. The point is, if the sole component of any behavior change is motivation, once that motivation starts to diminish, so will the new habit.
Of course, we have to at some level want to make a change, but we also have to realize that wanting alone is simply not enough. Instead, we should start with changes that require the least amount of motivation to carry out. This is the idea behind BJ Fogg’s Tiny Habits that we wrote about last week. If you want to start reading more, it might be tempting to try reading a chapter or two every day. But more often than not, you’re not going to be motivated to keep that up for long. If, instead, your goal is just to read one paragraph a couple of times a day, you’re far more likely to keep up the new habit. Then, over time, you’ll find you need less and less motivation to read more, until you don’t even think about it anymore.
This can be a hard pill to swallow. We like to believe that we can do anything we set our minds to, and it’s a little disheartening to think we don’t have as much control over our motivation as we might prefer. Looked at from a different angle, however, understanding this fact allows us to focus on what we can control: setting achievable goals and rewarding ourselves when we meet them. Focusing on that, rather than on our inability to keep our motivation high, will lead to more successful behavior change.
The dangers of online disinformation are by now common knowledge, but that hasn’t seemed to stop its spread. The current COVID-19 crisis has highlighted both the pervasiveness of disinformation and the danger it poses to society. We are in a situation where we need to rely on information for our health and safety. Yet, when accurate and false information sit side by side online, it is extremely difficult to know what to trust. The Director-General of the World Health Organization recognized this problem as early as February when he said that, alongside the pandemic, we are also fighting an “infodemic.” From articles, videos, and tweets discounting the severity of the virus to full-blown conspiracy theories, COVID-19 disinformation is everywhere.
Despite the steps social media sites have taken to combat disinformation about COVID-19, an Oxford study found that 88% of all false or misleading information about the coronavirus appears on social media sites. Another report found that, out of over 49 million tweets about the virus, nearly 38% contained misleading or manipulated content. This is largely because social media sites like Twitter and Facebook are trying to put a Band-Aid on a systemic issue. “They’ve built this whole ecosystem that is all about engagement, allows viral spread, and hasn’t ever put any currency on accuracy,” said Carl Bergstrom, a professor at the University of Washington. Simply put, the root of disinformation is not just the content being shared, but also the deep-seated practices social media platforms use to keep users engaged.
How Social Media Platforms Can Fix This
A new report by The German Marshall Fund takes the problem of disinformation head on and outlines what social media platforms can do to combat the problem and foster reliable and accurate reporting. Here are just a few of the steps the report recommends:
Design With “Light Patterns”
Websites and social media platforms often use “dark pattern” interfaces and defaults to manipulate users and hide how the site operates. “Light pattern” design, by contrast, means being transparent about how the site works: using defaults that favor transparency and labeling that shows the source of information, whether the account posting the content is verified, and whether audio or visual content has been altered.
Consistently Enforce Platform Rules
While all social media platforms have in-depth rules for user activity, these terms are generally applied and enforced inconsistently. By setting a transparent standard and consistently enforcing it, social media platforms can more successfully combat disinformation and other toxic online activity.
Create Independent Oversight
Instead of using government policy to regulate content, the U.S. should set up a technology-neutral agency to hold platforms accountable to a code of conduct focused on practices such as light pattern design. By overseeing positive platform practices rather than content, the government can avoid having a hand in decisions about what content is “good” or “bad.”
What You Can Do Now
However helpful these changes to social media platforms would be, the truth is we aren’t there yet. Fact and fiction stand side by side online, with no immediate way to discern which is which. When taking in information, it is up to you to figure out what is reputable and what is inaccurate. With the large amount of COVID-19 disinformation swarming the internet, it’s more important than ever to use our critical-thinking skills in two specific ways.
Examine Your Own Biases
Our personal worldviews, biases, and emotions shape how we take in information. When looking at content online, it’s important to think about your own motivations for believing something to be true or not. Ask yourself why you think something is true or false. Is it largely because you want to believe it or disbelieve it? When we read something online that makes us angry, there is something satisfying about sharing that anger with others. Before sharing content, ask whether your desire to share it is an emotional response or because the information is accurate and important. If it’s predominantly coming from your emotions, reconsider whether it’s worth sharing.
Be Critical of All Content
In general, we should read everything with a healthy degree of skepticism. Doubt everything and be your own fact checker. Look at other websites reporting the same information. Are any of them reliable? Are they all citing the same source, and, if so, is that source reputable? Don’t share an article based solely on the headline. Read the full article to understand whether the headline is based on fact or just speculation. Look at what sort of language the article uses. Is it largely opinion-based? Does it cite reputable sources? Is it written in a way that is meant to evoke an emotional response?
Months into the COVID-19 pandemic, we understand how our in-person interactions can have a negative impact on ourselves and those around us, but it’s important to also understand how our interactions online can lead to similar outcomes. Given the stupefying amount of disinformation about the coronavirus circulating online, it’s more important now than ever to think critically about what information you’re consuming and to be mindful of what you say and share online.
Your organization’s cybersecurity team is on edge in the best of times. The bad guys are always out there and, like offensive linemen in American football who are noticed only when they commit a penalty, cybersecurity personnel are usually noticed only when something goes wrong. Now the game has changed: the quick transition to working from home, combined with the plethora of COVID-19 scams, phishing, and malware flooding cybersecurity threat-intel sources—not to mention the isolation—may leave your team at a chronically high stress level. And cybersecurity is far more than just your technical safeguards. At the end of the day, the stress your team feels could lead them to put their focus in the wrong place and let their guard down.
Here’s what you can do about it
- Incorporate cybersecurity into your overall business strategy process – now is the time to recognize cybersecurity as a key part of the organization’s strategy, one that enables you to drive your mission forward.
- Be a part of the cybersecurity planning process – be active, listen, and understand how your team is handling this.
- Leverage your bully pulpit – communicate to the staff about the key areas your cybersecurity team is focused on and the role they are playing to keep the organization secure while everyone is working from home.
- Check in – take the time to just check in and see how they are doing. A little goes a long way.
The truth is, when it comes to cybersecurity, your first and most effective line of defense is not your firewall or encryption protocol. It’s the people that form a team dedicated to protecting your organization. Working from home poses unique cybersecurity challenges, and it’s up to you to make sure your team is given the attention they need to do their job well.
As the COVID-19 pandemic continues, the world has turned to the tech industry to help mitigate the spread of the virus and, eventually, help transition out of lockdown. Earlier this month, Apple and Google announced that they are working together to build contact-tracing technology that will automatically notify users if they have been in proximity to someone who has tested positive for COVID-19. However, reports show a severe lack of evidence that these technologies can accurately report infection data. Questions also arise about whether such apps can effectively assist the marginalized populations where the disease seems to have the largest impact. Combined with the invasion of privacy involved, the U.S. needs to more seriously interrogate whether the potential rewards of app-based contact tracing outweigh the obvious—and potentially long-term—risks involved.
First among the concerns is the potential for the information collected to be used to identify and target individuals. For example, in South Korea, some have used the information collected through digital contact tracing to dox and harass infected individuals online. Some experts fear that the collected data could also be used as a surveillance system to restrict people’s movement through monitored quarantine, “effectively subjecting them to home confinement without trial, appeal or any semblance of due process.” Such tactics have already been used in Israel.
Apple and Google have taken some steps to mitigate the concerns over privacy, claiming they are developing their contact tracing tools with user privacy in mind. According to Apple, the tool will be opt-in, meaning contact tracing is turned off by default on all phones. They have also enhanced their encryption technology to ensure that any information collected by the tool cannot be used to identify users, and promise to dismantle the entire system once the crisis is over.
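The privacy measures described above rest on broadcasting short-lived, pseudonymous identifiers rather than anything tied to a person. The sketch below is a deliberately simplified, hypothetical illustration of that idea (it is not the actual Apple/Google protocol): a device keeps a random daily key to itself and broadcasts only identifiers derived from it, which an observer cannot link back to the key or to each other.

```python
import hmac
import hashlib
import os

def daily_key() -> bytes:
    # A fresh random key generated on-device each day and never transmitted
    # (hypothetical simplification of the real scheme).
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    # Derive a short-lived broadcast identifier from the daily key and a
    # time-interval counter. Without the key, the identifiers look random
    # and cannot be linked to one another.
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

key = daily_key()
ids = [rolling_id(key, i) for i in range(3)]
# Each interval yields a distinct 16-byte identifier.
print(len(set(ids)), len(ids[0]))
```

Only someone holding the daily key (e.g., a user who tests positive and chooses to publish it) can regenerate the identifiers and let other devices check for matches, which is what makes the opt-in, decentralized design possible.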
Apple and Google are not using the phrase “contact tracing” for their tool, instead branding it as “exposure notification.” However, changing the name to sound less invasive doesn’t do anything to ensure privacy. And despite the steps Apple and Google are taking to make their tool more private, there are still serious short- and long-term privacy risks involved.
In a letter sent to Apple and Google, Senator Josh Hawley warns that the impact this technology could have on privacy “raises serious concern.” Despite the steps the companies have taken to anonymize the data, Senator Hawley points out that by comparing de-identified data with other data sets, individuals can be re-identified with ease. This could potentially create an “extraordinarily precise mechanism for surveillance.”
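The re-identification risk Senator Hawley describes is mechanically simple: records stripped of names often retain quasi-identifiers that can be joined against other datasets. The toy example below (all data invented) shows how a “de-identified” record set can be matched to a hypothetical public directory using nothing more than a zip code and birth year.

```python
# Toy illustration of a linkage attack: "anonymized" records are re-identified
# by joining on quasi-identifiers shared with a public dataset.
# All names and data here are invented for illustration.
anonymized = [  # no names, but quasi-identifiers remain
    {"zip": "98101", "birth_year": 1975, "test_result": "positive"},
    {"zip": "98052", "birth_year": 1990, "test_result": "negative"},
]
public_directory = [  # e.g., a voter roll or marketing list
    {"name": "A. Example", "zip": "98101", "birth_year": 1975},
    {"name": "B. Example", "zip": "98052", "birth_year": 1990},
]

def reidentify(anon_records, directory):
    # Match each anonymized record to a named person via shared quasi-identifiers.
    matches = []
    for record in anon_records:
        for person in directory:
            if (person["zip"], person["birth_year"]) == (record["zip"], record["birth_year"]):
                matches.append((person["name"], record["test_result"]))
    return matches

print(reidentify(anonymized, public_directory))
# → [('A. Example', 'positive'), ('B. Example', 'negative')]
```

Real attacks use the same join logic at scale, which is why removing names alone is widely considered insufficient anonymization.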
Senator Hawley also questions Apple and Google’s commitment to delete the program after the crisis comes to an end. Many privacy experts have echoed these concerns, worrying what impact these expanded surveillance systems will have in the long term. There is plenty of precedent to suggest that relaxing privacy expectations now will change individual rights far into the future. The “temporary” surveillance program enacted after 9/11, for example, is still in effect today and was even renewed last month by the Senate.
Contact tracing is often heralded as a successful method to limit the spread of a virus. However, a review published by a UK-based research institute shows that there is simply not enough evidence to be confident in the effectiveness of using technology to conduct contact tracing. The report highlights the technical limitations involved in accurately detecting contact and distance. Because of these limitations, this technology might lead to a high number of false positives and negatives. What’s more, app-based contact tracing is inherently vulnerable to fraud and cyberattack. The report specifically worries about the potential for “people using multiple devices, false reports of infection, [and] denial of service attacks by adversarial actors.”
Technical limitations aside, effective digital contact tracing also requires both a high compliance rate and a high level of public trust and confidence in the technology. Nothing suggests Apple and Google can guarantee either of these requirements. The lack of evidence for the effectiveness of digital contact tracing calls into question the use of such technology at the cost of serious privacy risks to individuals.
If we want to appropriately engage technology, we should determine the scope of the problem with an eye toward assisting the most vulnerable populations first, while ensuring that the intended outcomes can be achieved in a privacy-preserving manner. Governments need to lay out strict plans for oversight and regulation, coupled with independent review. Before compromising individual rights and privacy, the U.S. needs to thoroughly assess the effectiveness of this technology while implementing strict and enforceable safeguards to limit the scope and length of the program. Absent that, any further intrusion into our lives, especially if the technology is not effective, will be irreversible. In this case, the cure may well be worse than the disease.
This week, reports surfaced that the Small Business Administration’s COVID-19 loan program experienced an unintentional data breach last month, leaving the personal information of up to 8,000 applicants temporarily exposed. This is just the latest in a long line of COVID-19 cyberattacks and exposures since the pandemic began.
The affected program is the SBA’s long-standing Economic Injury Disaster Loan (EIDL) program, which Congress recently expanded to help small businesses affected by the COVID-19 crisis. The EIDL is separate from the new Paycheck Protection Program, which is also run by the SBA.
According to a letter sent to affected applicants, on March 25th the SBA discovered that the application system exposed personal information to other applicants using the system. The information potentially exposed includes names, addresses, phone numbers, birth dates, email addresses, citizenship status, insurance information, and even applicants’ Social Security numbers.
According to the SBA, upon discovering the issues they “immediately disabled the impacted portion of the website, addressed the issue, and relaunched the application portal.” All businesses affected by the COVID-19 loan program breach were eventually notified by the SBA and offered a year of free credit monitoring.
A number of recent examples show that the severe economic impact of the pandemic has left the SBA scrambling. Typically, the SBA is meant to issue funds within three days of receiving an application. However, with more than 3 million applications flooding in, some have had to wait weeks for relief.
The unprecedented number of applications filed, coupled with the fact that the SBA is the smallest major federal agency — suffering an 11% funding cut in the last budget proposal — likely contributed to the accidental exposure of applicant data. However, whether accidental or not, a data breach is still a data breach. It’s important that all organizations take the time to ensure their systems and data remain secure, and that mistakes do not lead to more work and confusion during a time of crisis.
The current onslaught of cyberattacks related to the COVID-19 pandemic continued this week. Tuesday night, reports surfaced that attackers publicized over 25,000 emails and passwords from the World Health Organization, The Gates Foundation, and other organizations working to fight the current COVID-19 pandemic. What’s more, this new data dump starkly shows how easily data breaches related to COVID-19 can fuel disinformation campaigns.
The sensitive information was initially posted online over the course of Sunday and Monday, and quickly spread to various corners of the internet frequented by right-wing extremists. These groups rapidly used the breached data to launch widespread harassment and disinformation campaigns about the COVID-19 pandemic. One such group posted the emails and passwords to their Twitter page and pushed a conspiracy theory that the information “confirmed that SARS-CoV-2 was in fact artificially spliced with HIV.”
A significant portion of the data may actually be out of date and from previous data breaches. In a statement to The Washington Post, The Gates Foundation said they “don’t currently have an indication of a data breach at the foundation.” Reporting by Motherboard also found that much of the data involved matches information stolen in previous data breaches. This indicates that at least some of the passwords circulating are not linked to the organizations’ internal systems unless employees are reusing passwords.
However, some of the information does appear to be authentic. Cybersecurity expert Robert Potter was able to use some of the data to access WHO’s internal computer systems and said that the information appeared to be linked to a 2016 breach of WHO’s network. Potter also noted a trend of disturbingly poor password security at WHO. “Forty-eight people have ‘password’ as their password,” while others simply used their own first names or “changeme.”
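The weaknesses Potter describes — “password” as a password, or a user’s own first name — are exactly what a basic credential audit catches. The sketch below is a minimal, hypothetical example of such an audit (the account names and passwords are invented, not from the breach).

```python
# Minimal sketch of a credential audit that flags the kinds of weak passwords
# described above. All usernames and passwords here are invented examples.
COMMON_PASSWORDS = {"password", "changeme", "123456", "qwerty"}

def audit(accounts):
    # Flag accounts whose password is on a common-password list
    # or simply matches the username itself.
    weak = []
    for user, pw in accounts.items():
        if pw.lower() in COMMON_PASSWORDS or pw.lower() == user.lower():
            weak.append(user)
    return weak

accounts = {"alice": "password", "bob": "S3cure!pass", "carol": "carol"}
print(audit(accounts))  # → ['alice', 'carol']
```

Production systems do this against far larger breach corpora (and against hashed rather than plaintext passwords), but the principle is the same: screen new and existing credentials against known-weak values before they become an attacker’s way in.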
Whether the majority of the information is accurate or not, it does not change the fact that the alleged breach has successfully fueled more disinformation campaigns about the COVID-19 pandemic. In the past few weeks, many right-wing extremist groups have used disinformation about the pandemic to spread fear and confusion in the hopes of sowing more chaos.
This episode starkly shows how data breaches can cause damage beyond the exposure of sensitive information. They can also be weaponized to spread disinformation and even lead to political attacks.