Ever since Apple announced the new privacy features included in the release of iOS 14, Facebook has waged a war against the company, arguing that these features will adversely affect small businesses and their ability to advertise online. What makes these attacks so “laughable” is not just Facebook’s disingenuous posturing as the protector of small businesses, but that its campaign against Apple suggests privacy and business are fundamentally opposed to each other. This is just plain wrong. We’ve said it before and we’ll say it again: privacy is good for business.
In June, Apple announced that its new mobile operating system, iOS 14, would include a feature called “AppTrackingTransparency” that requires apps to seek permission from users before tracking their activity across other apps and websites. This feature is a big step towards prioritizing user control of data and the right to privacy. In the months following Apple’s announcement, however, Facebook has waged a campaign against Apple and the new privacy feature. In a blog post earlier this month, Facebook claims that “Apple’s policy will make it much harder for small businesses to reach their target audience, which will limit their growth and their ability to compete with big companies.”
And Facebook didn’t stop there. They even took out full-page ads in the New York Times, Wall Street Journal and Washington Post to make their point.
Given that Facebook is currently being sued by more than 40 states for antitrust violations, there is some pretty heavy irony in the company’s stance as the protector of small business. Yet this only scratches the surface of what Facebook gets wrong in its attacks on Apple’s privacy features.
While targeted online advertising has been heralded as a more effective way for businesses to reach new audiences and start turning a profit, the groups that benefit most from these highly targeted ad practices are in reality gigantic data brokers. In response to Facebook’s attacks, Apple released a letter saying that “the current data arms race primarily benefits big businesses with big data sets.”
The privacy advocacy non-profit Electronic Frontier Foundation reinforced Apple’s point and called Facebook’s claims “laughable.” Startups and small businesses used to be able to support themselves by running ads on their own websites or apps. Now, however, nearly the entire online advertising ecosystem is controlled by companies like Facebook and Google, who not only distribute ads across platforms and services, but also collect, analyze and sell the data gained through those ads. Because these companies have a stranglehold on the market, they also rake in the majority of the profits. A study by the Association of National Advertisers found that publishers get back only 30 to 40 cents of every dollar spent on ads. The rest, the EFF says, “goes to third-party data brokers [like Facebook and Google] who keep the lights on by exploiting your information, and not to small businesses trying to work within a broken system to reach their customers.”
Because tech giants such as Facebook have overwhelming control over online advertising practices, small businesses that want to run ads have no choice but to use highly invasive targeting methods that end up benefitting Facebook more than the businesses themselves. Facebook’s claim that its crusade against Apple’s new privacy features is meant to help small businesses simply doesn’t hold water. Instead, Facebook has a vested interest in maintaining the idea that privacy and business are fundamentally opposed to one another, because that position suits its business model.
At the end of the day, the problem facing small businesses is not privacy. The problem is the fundamental imbalance between a handful of gigantic tech companies and everyone else. Apple’s move to ensure all apps play by the same rules and protect the privacy of their users is a good step towards leveling the playing field, and thereby actually helping small businesses grow.
This also shows the potential benefits of a federal, baseline privacy regulation. Currently, U.S. privacy regulations are enacted and enforced at the state level, which, while a step in the right direction, can end up stifling business growth as organizations attempt to navigate various regulations with different levels of requirements. In fact, last year a group of CEOs sent a letter to Congress urging the government to put federal privacy regulations in place, saying that “as the regulatory landscape becomes increasingly fragmented and more complex, U.S. innovation and global competitiveness in the digital economy are threatened” and that “innovation thrives under clearly defined and consistently applied rules.”
Lastly, we recently wrote about how consumers are willing to pay more for services that don’t collect excessive amounts of data on their users. This suggests that surveillance advertising and predatory tracking do not build customers; they build transactions. Apple’s new privacy features open up a space for businesses to use privacy-by-design principles in their advertising and services, providing a channel to those customers who place a value on their privacy.
Privacy is not bad for business, it’s only bad for business models like Facebook’s. By leveling the playing field and providing a space for new, privacy-minded business models to proliferate, we may start to see more organizations realize that privacy and business are actually quite compatible.
By now, most everyone has heard about the threat of misinformation within our political system. At this point, fake news is old news. However, this doesn’t mean the threat is any less dangerous. In fact, over the last few years misinformation has spread beyond the political world and into the private sector. From a fake news story claiming that Coca-Cola was recalling Dasani water because of a deadly parasite in the bottles, to false reports that an Xbox killed a teenager, more and more businesses are facing online misinformation about their brands, damaging their reputations and financial stability. While businesses may not think to take misinformation attacks into account when evaluating the cyber threat landscape, it’s increasingly clear that misinformation should be a primary concern for organizations. Just as businesses are beginning to understand the importance of being cyber-resilient, they also need policies in place to stay misinformation-resilient. This means organizations need to take both a proactive and a reactive stance towards future misinformation attacks.
Perhaps the method of disinformation we are all most familiar with is the use of social media to quickly spread false or sensationalized information about a person or brand. However, disinformation can take a number of different guises. Fraudulent domains, for example, can be used to impersonate companies and misrepresent their brands. Attackers also create copycat sites that look like your website but actually contain malware that visitors download when they visit. Insiders can weaponize digital tools to settle scores or hurt the company’s reputation: the water-cooler rumor mill can now play out in very public and shareable online spaces. Finally, attackers can create doctored videos, called deepfakes, that convincingly show public figures saying things on camera they never actually said. You’ve probably seen deepfakes of politicians like Barack Obama or Nancy Pelosi, but these videos can also be used to impersonate business leaders, then be shared online or circulated among staff.
With all of the different ways misinformation attacks can be used against businesses, it’s clear organizations need to be prepared to stay resilient in the face of any misinformation that appears. Here are 5 steps all organizations should take to build and maintain a misinformation-resilient business:
1. Monitor Social Media and Domains
Employees across various departments of your organization should constantly keep an ear to the ground by closely monitoring for any strange or unusual activity by and about your brand. Your marketing and social media team should regularly keep an eye on any chatter online about the brand and evaluate the veracity of claims being made, where they originate, and how widely the information is being shared.
At the same time, your IT department should be continuously looking for new domains that mention or closely resemble your brand. It’s common for scammers to create domains that impersonate brands in order to spread false information, phish for private information, or simply seed confusion. The frequency of domain spoofing has skyrocketed this year, as bad actors take advantage of the panic and confusion surrounding the COVID-19 pandemic. When it comes to spotting deepfakes, your IT team should invest in software that can detect whether images and recordings have been altered.
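As one illustration of that domain monitoring, a short script can flag candidate lookalikes with a plain string-similarity check. This is a minimal, hypothetical sketch (the brand name, domain list, and 0.8 threshold are made-up examples; a real pipeline would pull candidates from zone files or certificate-transparency logs):

```python
# Hypothetical sketch: flag domains whose registrable label closely
# resembles a brand name. Not a complete brand-protection tool.
from difflib import SequenceMatcher

BRAND = "examplebrand"  # illustrative brand name

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, a, b).ratio()

def flag_lookalikes(domains, brand=BRAND, threshold=0.8):
    """Return domains that closely resemble the brand but aren't it."""
    flagged = []
    for domain in domains:
        label = domain.split(".")[0]  # crude: ignore the TLD
        if label != brand and similarity(label, brand) >= threshold:
            flagged.append(domain)
    return flagged

candidates = ["examplebrand.com", "exampl3brand.net",
              "examplebrnad.co", "weather.com"]
print(flag_lookalikes(candidates))  # → ['exampl3brand.net', 'examplebrnad.co']
```

A check like this is only a first filter: flagged domains still need human review, and attackers also use homoglyphs or added words (e.g. a hypothetical examplebrand-support.com) that a plain similarity score can miss.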
Across all departments, your organization needs to keep an eye out for any potential misinformation attacks. Departments also need to be in regular communication with each other and with business leadership to evaluate the scope and severity of threats as soon as they appear.
2. Know When You Are Most Vulnerable
Often, the scammers behind misinformation attacks are opportunists. They look for big news stories, moments of transition, or times when investors will be keeping a close eye on an organization, in order to create attacks with the biggest impact. Broadcom’s shares plummeted after a fake memorandum from the US Department of Defense claimed that an acquisition the company was about to make posed a threat to national security. Organizations need to stay vigilant for moments that scammers can take advantage of, and prepare a response to any potential attack that could arise.
3. Create and Test a Response Plan
We’ve talked a lot about the importance of having a cybersecurity incident response plan, and the same rule holds for responding to misinformation. Just as with a cybersecurity attack, you shouldn’t wait to figure out a response until after an attack has happened. Instead, organizations need to form a team from various levels within the company and create a detailed plan for responding to a misinformation campaign before it actually happens. Teams should know what resources will be needed to respond, who internally and externally needs to be notified of the incident, and which team members will handle which aspect of the incident.
It’s also important not just to create a plan, but to test it as well. Running periodic simulations of a disinformation attack will not only help your team practice their response, but can also show you what areas of the response aren’t working, what wasn’t considered in the initial plan, and what needs to change to make sure your organization’s response runs like clockwork when a real attack hits. Depending on the organization, it may make sense to include disinformation attacks within the cybersecurity response plan or to create a new plan and team specifically for disinformation.
4. Train Your Employees
Employees throughout the organization should also be trained to understand the risks disinformation can pose to the business, and how to effectively spot and report any instances they come across. Employees need to learn to question the images and videos they see, just as they should be wary of links in an email. They should be trained on how to quickly respond internally to disinformation originating from insiders like disgruntled employees, and key personnel need to be trained on how to quickly respond to disinformation in the broader digital space.
5. Act Fast
Putting all of the above steps in place will enable organizations to take swift action against disinformation campaigns. Fake news spreads fast, so organizations need to act just as quickly. Everything from putting your response plan in motion, to communicating with your social media followers and stakeholders, to contacting social media platforms to have the disinformation removed, needs to happen quickly for your organization to stay ahead of the attack.
It may make sense to think of cybersecurity and misinformation as two completely separate issues, but more and more businesses are finding that the two are closely intertwined. Phishing attacks rely on disinformation tactics, and fake news relies on technical sophistication to make its content more convincing and harder to detect. In order to stay resilient to misinformation, businesses need to incorporate these issues into larger conversations about cybersecurity across all levels and departments of the organization. Preparing now and having a response plan in place can make all the difference in maintaining your business’s reputation when false information about your brand starts making the rounds online.
The dangers of online disinformation are by now common knowledge, but that hasn’t seemed to stop its spread. The current COVID-19 crisis has highlighted both the pervasiveness of disinformation and the danger it poses to society. We are in a situation where we need to rely on information for our health and safety. Yet, when accurate and false information sit side by side online, it is extremely difficult to know what to trust. The Director-General of the World Health Organization recognized this problem as early as February when he said that, alongside the pandemic, we are also fighting an “infodemic.” From articles, videos, and tweets discounting the severity of the virus to full-blown conspiracy theories, COVID-19 disinformation is everywhere.
Despite the steps social media sites have taken to combat disinformation about COVID-19, an Oxford study found that 88% of all false or misleading information about the coronavirus appears on social media sites. Another report found that, out of over 49 million tweets about the virus, nearly 38% contained misleading or manipulated content. The reason is largely that social media sites like Twitter and Facebook are trying to put a Band-Aid on a systemic issue. “They’ve built this whole ecosystem that is all about engagement, allows viral spread, and hasn’t ever put any currency on accuracy,” said Carl Bergstrom, a professor at the University of Washington. Simply put, the root of disinformation is not just the content being shared, but also the deep-seated practices social media uses to keep users engaged.
How Social Media Platforms Can Fix This
A new report by The German Marshall Fund takes the problem of disinformation head on and outlines what social media platforms can do to combat the problem and foster reliable and accurate reporting. Here are just a few of the steps the report recommends:
Design With “Light Patterns”
Websites and social media platforms often use “dark pattern” interfaces and defaults to manipulate users and hide information about how the site operates. Light pattern design, by contrast, is built on transparency about how the site operates: using defaults that favor openness, and labeling that shows the source of information, whether the account posting the content is verified, and whether audio and visual content has been altered.
While all social media platforms have in-depth rules for user activity, these terms are generally inconsistently applied and enforced. By setting a transparent standard and consistently enforcing that standard, social media platforms can more successfully combat disinformation and other toxic online activity.
Instead of using government policy to regulate content, the U.S. should set up a technology-neutral agency to hold platforms accountable for a code of conduct focused on practices such as light pattern designs. By focusing on overseeing positive platform practices, the government can avoid having a hand in decisions about what content is “good” or “bad.”
What You Can Do Now
However helpful these changes to social media platforms would be, the truth is we aren’t there yet. Fact and fiction stand side by side online, with no immediate way to discern which is which. When taking in information, it is up to you to figure out what is reputable and what is inaccurate. With the large amount of COVID-19 disinformation swarming the internet, it’s more important than ever to use our critical skills in two specific ways.
Be Aware of Your Own Biases
Our personal world views, biases, and emotions shape how we take in information. When looking at content online, it’s important to think about your own motivations for believing something to be true or not. Ask yourself why you think something is true or false. Is it largely because you want to believe it or disbelieve it? When we read something online that makes us angry, there is something satisfying about sharing that anger with others. Before sharing content, ask whether your desire to share it is an emotional response or because the information is accurate and important. If it’s predominantly coming from your emotions, reconsider whether it’s worth sharing.
Be Critical of All Content
In general, we should initially read everything with a degree of skepticism. Doubt everything and be your own fact checker. Look at other websites reporting the same information. Are any of them reliable? Are they all citing the same source, and, if so, is that source reputable? Don’t share an article based solely on the headline. Read the full article to understand whether the headline is based on fact or is just speculation. Look at the sort of language the article uses. Is it largely opinion-based? Does it cite reputable sources? Is it written in a way that is meant to evoke an emotional response?
Months into the COVID-19 pandemic, we understand how our in-person interactions can have a negative impact on ourselves and those around us, but it’s important to also understand how our interactions online can lead to similar outcomes. Given the stupefying amount of disinformation about the coronavirus circulating online, it’s more important now than ever to think critically about what information you’re consuming and be aware of what you say and share online.
Social media was designed to connect people. At least, that’s what those behind these sites never stop telling us. They’re meant to create, as Mark Zuckerberg says, “a digital town square.” Yet, as it turns out, the effect social media has on us seems to actually be going in the opposite direction. Social media is making us less social.
Last year a study by the University of Pittsburgh and West Virginia University was published showing links between social media use and depression. Now the same team has released a new study that takes things a step further. It found that social media not only leads to depression, but actually increases the likelihood of social isolation. According to the study’s findings, for every 10% rise in negative experiences on social media, there was a 13% increase in loneliness. What’s more, they found that positive experiences online show no link to an increase in feelings of social connection.
These two studies make clear what we may already feel: the form in which social media connects us ends up leaving us more isolated. And, as strange as it may sound, this could have a profound impact on how we view our privacy. At root, privacy involves the maintenance of a healthy self-identity. And this identity doesn’t form in a vacuum. Instead, it is shaped through our relationship to a community of people.
So, to the extent social media is isolating, it also desensitizes us to our notions of ourselves and of the world that surrounds us. When we lose a sense of boundaries in relation to a community, then anything, including the value of privacy, can go out the window.
And this can turn into a vicious cycle: the lonelier you feel, the more you’re likely to seek validation on social media. Yet, the more you seek that validation, the more that sense of loneliness rears its head. And often seeking this type of social validation leads to privacy taking a back seat. Earlier we wrote about an increase in the success of romance scams, which is just one example of how a sense of loneliness can have the effect of corroding privacy practices.
While these studies don’t exactly mean we should go off the grid, it’s clear that to understand and value ourselves, we need at times to detach from technology. And, from a business perspective, there are lessons to be learned here too. While technology can make communication more convenient, that shouldn’t translate to having every conversation through a digital platform. Pick up the phone. Have lunch with a customer. Talk to them instead of selling them. Having more personalized conversation will not only translate to stronger business relationships but may even have an effect on the value placed on privacy as well.
Two weeks ago, Mark Zuckerberg penned an essay detailing Facebook’s shift towards a more privacy-focused platform. “As I think about the future of the internet,” he writes, “I believe a privacy-focused communications platform will become even more important than today’s open platforms.” For Zuckerberg, this predominantly means focusing efforts on his private messaging services (Facebook Messenger, Instagram Direct, and WhatsApp) by including end-to-end encryption across all platforms.
But given the myriad privacy scandals plaguing Facebook over the past few years, it is important to look critically at what Zuckerberg is outlining. Many of the critiques of Zuckerberg that have been written focus primarily on the monopolistic power-grab that he introduces under the term “interoperability.” For Zuckerberg, this means integrating private communications across all of Facebook’s messaging platforms. From a security perspective, the idea is to standardize end-to-end encryption across a diversity of messaging platforms (including SMS), but, as the MIT Technology Review points out, this amounts to little more than a heavy-handed centralization of power: “If his plan succeeds, it would mean that private communication between two individuals will be possible when Mark Zuckerberg decides that it ought to be, and impossible when he decides it ought not to be.”
However, without downplaying this critique, what seems just as concerning, if not more so, is the concept of privacy that Zuckerberg is advocating. In the essay, he speaks about his turn towards messaging platforms as a shift from the town square to the “digital equivalent of a living room,” in which our interactions are more personal and intimate. Coupled with end-to-end encryption, the idea is that Facebook will create a space in which our communications are kept private.
But they won’t, because Zuckerberg fundamentally misrepresents how privacy works. Today, the content of what you say is perhaps the least important aspect of your digital identity. Instead, it is all about the metadata. In terms of communication, the who, the when, and the where can tell someone more about you than simply the what. Digital identities are constructed less by what we think and say about ourselves, and far more through a complex network of information that moves and interacts with other elements within that network. Zuckerberg says that “one great property of messaging services is that even as your contacts list grows, your individual threads and groups remain private,” but who, for example, has access to our contact lists? These are the types of questions that Zuckerberg sidesteps in his essay, but they are the ones that show how privacy actually functions today.
Like a living room, we can concede that end-to-end encryption will give users more confidence that their messages will only be seen by the person or people within that space. But digital privacy does not function on a “public vs. private sphere” model. If it is a living room, it has the equivalent of a surveillance team stationed outside, recording who enters, how long they stay, how the room is accessed, and so on. For all his failings, we would be wrong to assume that Zuckerberg is ignorant of the importance of metadata. In large part he has built his fortune on it. What we see in his essay, then, is little more than a not-so-subtle misdirect.
What businesses now need to realize is that such high-profile scandals will likely have direct impacts not simply in Silicon Valley, but on a national and even global scale.
In fact, on October 22, Google, Facebook, Apple and Microsoft endorsed a federal privacy law based upon a framework developed by the Information Technology Industry Council.
To help businesses better understand the impact privacy regulation may have for them, we have put together the top three implications these new regulations could have on businesses in the coming months.
Consumers will play an active role in how companies collect and use personal information
Perhaps the strictest aspect of California’s new regulations is the central role consumers will now play in deciding how (or whether) their information is used. Consumers now have the right to request from companies not only what information is being collected (and even to request an accessible copy of that data), but also for what purpose. Moreover, the law allows consumers to request that companies delete their personal information, and even to opt out of the sale of such information.
A broader definition of protected private data
The California Consumer Privacy Act substantially broadens what counts as ‘personal information’ and therefore extends the scope of regulation beyond what we generally consider tech companies. Under the new regulations, ‘personal information’ now includes a consumer’s internet activity, biometric data, education and employment information, as well as information on the consumer’s purchases and personal property. Broadening the definition of personal information therefore implicates far more businesses than the likes of Facebook and Google. Now, any company that collects or uses such consumer data will be subject to regulation.
Targeted advertising will become less effective
The effectiveness of targeted online advertising campaigns relies on the extreme specificity enabled by access to consumer data. As Dipayan Ghosh points out in the Harvard Business Review, these regulations will have an impact on any business that makes use of online advertising. Targeted campaigns will become less precise and may therefore “significantly cut into the profits [ ] firms currently enjoy, or force adjustments to [ ] revenue-growth strategies.”
Any business that has customers in California needs to seriously consider how it will now comply with these new regulations. What’s more, discussions of putting federal regulations in place are well underway, and it is possible that California’s new personal information laws could form the basis of such regulations. It is therefore in the best interest of any business that makes use of consumer data to seriously consider what impact such regulations could have in the coming months and years.
What should businesses be doing now, even if they don’t fall under California or GDPR privacy regulations?
Know what data you are capturing and where it is stored. Review your data flows in your customer, accounting, employee and other databases so you know what you are capturing, the reason you are capturing it and where you are storing it. Keeping an accurate data inventory is critical. And, it makes good sense.
Be transparent with your users about what you are doing with their data. Review your privacy policies. Make sure they are free of legalese and clearly explain what you will be doing with the data, who (if anyone) you will share the data with, and what rights the user has if they want the data changed or removed. Try not to think of this as a compliance exercise. Think of it as customer engagement. By doing so, you can create a better relationship with your customers, because you show that you respect them and their information.
Ask before you Capture — Where possible, get the user’s consent prior to capturing the data. You will have better customers if they opt in to the relationship rather than finding themselves in one.
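The data inventory recommended above can be as simple as a structured list recording what you capture, why, where it lives, and whether the user consented. A minimal, hypothetical sketch (the field names and the `needs_review` helper are illustrative, not drawn from any regulation or tool):

```python
# Hypothetical sketch of a minimal data inventory. Each entry records what
# is captured, the reason, the storage location, and whether the user
# opted in before capture.
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    field: str       # what is captured
    purpose: str     # why it is captured
    location: str    # where it is stored
    opted_in: bool   # did the user consent before capture?

inventory = [
    InventoryEntry("email", "order receipts", "customer_db", True),
    InventoryEntry("browsing_history", "ad targeting", "analytics_db", False),
]

def needs_review(entries):
    """Flag fields captured without consent, for review or deletion."""
    return [e.field for e in entries if not e.opted_in]

print(needs_review(inventory))  # → ['browsing_history']
```

Even a spreadsheet with these four columns serves the same purpose; the point is that every captured field has a stated reason, a known location, and a consent status you can act on.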
Privacy does not have to be viewed as compliance or even a restriction on doing business. In fact, successful businesses going forward will use privacy as a tool for increased customer engagement.