Earlier this month, a study by University College London identified the top 20 security issues and crimes likely to be carried out with the use of artificial intelligence in the near future. Experts then ranked the list of future AI crimes by the potential risk associated with each. While some of the crimes are what you might expect to see in a movie — such as autonomous drone attacks or using driverless cars as weapons — it turns out that four of the six crimes of highest concern are less glamorous, focusing instead on exploiting human vulnerabilities and biases.
Here are the top four human-factored AI threats:
The ability for AI to fabricate visual and audio evidence, commonly called deepfakes, is the overall most concerning threat. The study warns that the use of deepfakes will “exploit people’s implicit trust in these media.” The concern is not only related to the use of AI to impersonate public figures, but also the ability to use deepfakes to trick individuals into transferring funds or handing over access to secure systems or sensitive information.
Other high-risk, human-factored AI threats include scalable spear-phishing attacks. At the moment, phishing emails targeting specific individuals require time and energy to learn the victim's interests and habits. However, AI can expedite this process by rapidly pulling information from social media or impersonating trusted third parties. AI can therefore make spear-phishing both more likely to succeed and far easier to deploy on a mass scale.
Similarly, the study warns that AI can be used to harvest massive amounts of information about individuals, identify those most vulnerable to blackmail, and then send tailor-crafted threats to each victim. These large-scale blackmail schemes can also use deepfake technology to create fake evidence against those being blackmailed.
Lastly, the study highlights the risk of using AI to author highly convincing disinformation and fake news. Experts warn that AI will be able to learn what type of content will have the highest impact, and generate different versions of one article to be published by a variety of (fake) sources. This tactic can help disinformation spread even faster and make it seem more believable. Disinformation has already been used to manipulate political events such as elections, and experts fear the scale and believability of AI-generated fake news will only increase the impact disinformation will have in the future.
The results of the study underscore the need to develop systems to identify AI-generated images and communications. However, that might not be enough. According to the study, when it comes to spotting deepfakes, "[c]hanges in citizen behaviour might [ ] be the only effective defence." With the majority of the highest-risk crimes being human-factored threats, focusing on our own ability to understand ourselves, and developing behaviors that give us the space to reflect before we react, may therefore become the most important tool we have against these threats.
We understand the risks of having our email credentials compromised. If it happens, we know to change our login information as quickly as possible to ensure whoever got in can’t continue to access our emails. The problem, however, is that there is a very simple way for hackers to continue to access the content of your inbox even after you change your password: auto-forwarding. If someone gains access to your email, they can quickly change your configurations to have every single email sent to your inbox forwarded to the hacker’s personal account as well.
The most immediate concern with unauthorized auto-forwarding is that a hacker can view and steal any sensitive or proprietary information sent to your inbox. However, the risks associated with this form of attack have far greater ramifications. By setting up auto-forwarding, phishers can carry out reconnaissance efforts in order to launch more sophisticated social engineering scams in the long term.
For example, auto-forwarding can help hackers carry out spear-phishing attacks — a form of phishing where the scammer tailors phishing emails to target specific individuals. By learning how the target communicates with others and what type of email they are most likely to respond to, hackers can create far more convincing phishing emails and increase the chance that their attack will be a success.
Bad actors can also utilize auto-forwarding to craft highly sophisticated business email compromise (BEC) attacks. BEC is a form of social engineering in which a scammer impersonates vendors or bosses in order to trick employees into transferring funds to the wrong place. If the scammer is using auto-forward, they may be able to see specific details about projects or services being carried out and gain a better sense of the formatting, tone, and style of invoices or transfer requests. This can then be used to create fake invoices for actual services that require payment.
How to protect yourself from unauthorized auto-forwarding
There are, however, a number of steps you and your organization can take to prevent hackers from setting up auto-forwarding. The most obvious is to prevent access to your email account in the first place. Multi-factor authentication, for example, places an extra line of defense between the hacker and your inbox. However, every organization should also disable or limit users' ability to set up auto-forwarding. Some email providers allow organizations to block auto-forward by default. Your IT or security team can then manually enable auto-forwarding for specific employees when requested for legitimate reasons and for a defined time period.
When it comes to the risks of auto-forwarding, the point is that the more hackers can learn about your organization and your employees, the more convincing their future phishing and BEC attacks will be. By putting safeguards in place that help prevent access to email accounts and block auto-forwarding, you can lower the risk that a bad actor will gain information about your organization and carry out sophisticated social engineering attacks.
By now, most people have heard about the hack of high-profile Twitter accounts that took place on July 15th. To carry out the attack, the perpetrators used a social engineering tactic called “vishing” — short for voice phishing — in which attackers use phone calls rather than email or messages to trick individuals into giving out sensitive information. The incident once again highlights the risks associated with human rather than technical vulnerabilities, and shows Twitter’s shortcomings in managing employee access controls.
On the day of the attack, big names like former President Barack Obama, Elon Musk, Jeff Bezos, and Joe Biden all tweeted a message asking users to send them bitcoin with the promise of being sent back double the amount. Of course, this turned out to be a scam and the tweets were quickly removed, but not before the hackers received over $100,000 worth of bitcoin.
According to a statement by Twitter, the attackers gained access to the company's internal systems the same day as the attack. By using "a phone spear phishing attack" — commonly known as vishing — the scammers tricked lower-level employees into revealing credentials that allowed them into Twitter's internal systems. That access did not immediately extend to user accounts. However, once inside, the attackers were able to target other employees who did have access to Twitter's support controls. From there, the hackers could reach every account on Twitter and make important changes, including changing the email address associated with an account.
While vishing is not the best-known or most frequent form of social engineering attack, the Twitter hack shows just how dangerous it can be. It's the one type of attack that requires no code, email, or USB device to carry out. However, there are key protections businesses can use, and that should have been in place at Twitter. First among them is to have explicit policies and safeguards for disclosing credentials and wiring funds. Individual employees should not be allowed to give out information on their own — even if they think they are giving it to a trusted colleague. Instead, employees should have to communicate with a third party within the company who can verify an employee's identity before sharing credentials.
Secondly, Twitter needed stricter access controls throughout all levels of the company. While Twitter claims that "access to [internal] tools is strictly limited and is only granted for valid business reasons," this was clearly not the case on July 15th. And even though the employees who were initially exploited did not have full access to user accounts, the hackers were able to leverage the limited access they had to gain even more advanced and detailed permission rights. Businesses should therefore ensure all employees, even those with limited access, receive proper cyber awareness training and undergo simulations of various social engineering attacks.
As Twitter put it in its statement: "This was a striking reminder of how important each person on our team is in protecting our service. We take that responsibility seriously and everyone at Twitter is committed to keeping your information safe."
Lastly, when it comes to vishing, it's important to use techniques similar to those used to spot other types of scams. When getting a call, the first thing to do is simply take a breath. This will interrupt automatic thinking and allow you to be more alert. You also need to make sure you are actually talking to the person you think you are. Scammers can make a call look like it's coming from a trusted number, so even if you get a call from someone in your contacts, it could still be a scammer. That's why it's important to focus on what the phone call or voicemail is trying to convey. Is it overly urgent? Are they probing for sensitive or personal information about you or others? Is it relevant to what you already know? If anything at all seems off, be extra cautious before saying anything that could be damaging.
While you may feel comfortable spotting a phishing attack, hackers and scammers are constantly looking for new ways to trick us. And, as the Twitter hack shows, they are very good at what they do. It's better to be too cautious and assume you are at risk of being scammed than to think it could never happen to you. Because it can.
Earlier this week we wrote about the cost of human-factored, malicious cyber attacks. However, there are also other threats that can lead to a malicious attack and data breach. According to this year's Cost of a Data Breach Report, stolen or compromised credentials tied for the most frequent cause of malicious data breaches, and took the lead as the most costly form of malicious breach.
The root cause of compromised credentials varies. In some cases, stolen credentials are tied to human-factored social engineering scams such as phishing or business email compromise attacks. In other cases, your login information may have been stolen in a previous breach of an online service you use. Hackers will often sell that data on the dark web, where bad actors can then use it to carry out new attacks.
Whatever the cause, the threat is real and costly. According to the report, compromised credentials accounted for 1 out of every 5 — or 19% of — reported malicious data breaches. That makes this form of attack tied with cloud misconfiguration as the most frequent cause of a malicious breach. However, stolen credentials tend to cost far more than any other cause of malicious breach. According to the report, the average cost of a breach caused by compromised credentials is $4.77 million — costing businesses nearly $1 million more than other forms of attack.
Given the frequency of data breaches caused by compromised credentials, individuals and businesses alike need to be paying closer attention to how they store, share, and use their login information. Luckily, there are a number of pretty simple steps anyone can take to protect their credentials. Here are just a few:
Use a Password Manager

There are now a variety of password managers that can vastly improve your password strength and help stop you from using the same or similar passwords for every account. In most cases, they can be installed as a browser extension and phone app and will automatically save your credentials when you create an account. Not only are password managers an extremely useful security tool, they are an incredibly convenient one at a time when we all have hundreds of different accounts.
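Under the hood, a password generator is doing something very simple: drawing characters from a large alphabet using a cryptographically secure source of randomness. Here is a minimal sketch in Python using only the standard-library `secrets` module (the function name and default length are illustrative, not taken from any particular password manager):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password using a cryptographically secure RNG.

    Uses secrets.choice (not random.choice, which is predictable) so each
    character is drawn independently from letters, digits, and punctuation.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A 20-character password drawn from this ~94-character alphabet has far more entropy than anything a person would memorize, which is exactly why pairing a generator with a manager that remembers the result for you is so effective.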
Multi-Factor Authentication

Another important and easy-to-use tool is multi-factor authentication (MFA), in which you are sent a code after logging in to verify your account. So, even if someone steals your login credentials, they still won't be able to access your account without the code. While best practice is to use MFA for any account that offers the feature, everyone should at the very least use it for accounts that contain personal or sensitive information, such as online bank accounts, social media accounts, and email.
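The codes produced by authenticator apps aren't magic: most implement the standard TOTP algorithm (RFC 6238), which is just an HMAC over a counter derived from the current time, sharing a secret between you and the service. A minimal standard-library sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226 section 5.3): the low nibble of the last
    # byte selects a 4-byte window in the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based OTP (RFC 6238): HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // period, digits)
```

Because the code changes every 30 seconds and is derived from a secret the attacker doesn't have, stolen credentials alone are not enough to log in.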
Check Past Compromises
In order to ensure your information is protected, it's important to know if your credentials have ever been exposed in previous data breaches. Luckily, there is a site that can tell you exactly that. Have I Been Pwned is a free service created and run by cybersecurity expert Troy Hunt, who keeps a database of information compromised in breaches. Users can search the data to see if their email address or previously used passwords have ever been involved in those breaches. You can also sign up to receive notifications if your email is ever involved in a future breach.
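For passwords specifically, the service's Pwned Passwords API supports a k-anonymity "range" query, so you never send the full password, or even its full hash, over the network. A minimal sketch using only the Python standard library (the network request only happens when `pwned_count` is called):

```python
import hashlib
import urllib.request

def sha1_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-character prefix that
    is sent to the API and the 35-character suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches (0 = none).

    Only the 5-character hash prefix ever leaves your machine; the API returns
    every matching suffix in that range, and the comparison happens locally.
    """
    prefix, suffix = sha1_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

For example, `pwned_count("password")` returns a count in the millions, which is exactly why that string should never protect anything.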
Cyber Awareness Training
Lastly, in order to keep your credentials secure, it's important that you don't get tricked into giving them away. Social engineering, phishing, and business email compromise schemes are all highly frequent — and often successful — ways bad actors try to gain access to your information. Scammers will send emails or messages pretending to be from a company or official source, then direct you to a fake website where you are asked to fill out information or log in to your account. Preventing these scams from working largely depends on your ability to accurately spot them. And, given the increased sophistication of these scams, using a training program specifically designed to teach you how to spot the fakes is very important.
Last week, IBM and the Ponemon Institute released their annual Cost of a Data Breach Report. For the past 15 years, the report has highlighted recurring and emerging factors that contribute to the cost of data breaches, as well as the root causes of those breaches. One of the key findings in this year's report is that human-factored cyber attacks not only make up a large percentage of all malicious attacks, but are also incredibly costly to the businesses that suffered breaches. This only confirms the importance of cyber awareness training for employees to limit the risk of a human-factored attack.
There are many different causes of a data breach, some of which are merely accidental. However, according to this year’s report, malicious attacks now make up 52% of all breaches. This didn’t used to be the case. In fact, malicious attacks have seen a 24% growth rate in just six years. Malicious attacks are also the most expensive, costing businesses an average of $4.27 million. That’s nearly $1 million more than all other causes of a breach.
Given the frequency and cost of malicious attacks, it's important to look closer at the different threats that account for their rise — and the data is surprising. While expected threats such as system vulnerabilities and malicious insiders are certainly present, human-factored cyber attacks make up a large chunk of all malicious attacks. Threats ranging from phishing attacks, to business email compromise, to social engineering and cloud misconfigurations are all rooted in human rather than technical vulnerabilities, and account for 41% of all malicious attacks leading to data breaches. Indeed, this report correlates with what was presented in the Verizon 2020 Data Breach Investigations Report.
Human factored cyber attacks aren’t something you can protect yourself against strictly through technically safeguards. Instead protecting against these vulnerability requires working with employees, establish proper quality control protocols, ensuring your have the right expertise on your team and using cyber awareness training to help build safer online habits.
As a Fortune 100 CISO once told me, “at the end of the day, every cyber incident starts with someone making a decision.”
By now it is commonly understood that free online services such as social media, search engines, and email are not actually free. Instead, we use those services in exchange for data about who we are and what we want, which can then be used to show us highly targeted advertising or simply sold to third-party companies. Our identities have become among the most valuable commodities in the world, and we hand them to tech giants every single day.
That's why there is a growing movement among some lawmakers to make companies pay consumers for the data they use. Data dividends, as the idea is called, are now being pushed by politicians like Andrew Yang and California Governor Gavin Newsom, who argue that, by ensuring companies pay users for their data, consumers will be empowered to take more control of their online identity and privacy.
The problem, however, is that once you take a closer look at the concept Yang and Governor Newsom are pushing, it becomes clear that this project, which is meant to promote privacy, ends up reinforcing a system that commodifies consumer data and disincentivizes privacy-first practices. We are treading a dangerous path if we attempt to monetize identity.
Here is why:
Paying consumers for their data doesn't protect their privacy. Instead, it ends up justifying the current practice of data mining that undermines the right to privacy. Certain companies are already using similar practices. Amazon, for example, offered $25 Amazon gift cards in exchange for full-body 3D scans of users. It's a dramatic example, but fundamentally equivalent to what lawmakers are now proposing.
Privacy is a human right, and as such it cannot be bought and sold in a free and open society. It's like saying that companies can take away your right to free expression so long as they compensate you for it. Making money off of user data and sharing it with third parties has already been normalized by tech companies, and data dividends only further validate these practices.
This isn't to say that distributing the money earned from personal information back to the consumer is necessarily a bad thing; it's simply an issue entirely separate from privacy. If companies were required to pay data dividends, it would in no way lessen the importance of ensuring the privacy of our identities and data.