Experts Worry Future AI Crimes Will Target Human Vulnerabilities

Earlier this month, a study by University College London identified the top 20 security issues and crimes likely to be carried out with the use of artificial intelligence in the near future. Experts then ranked the list of future AI crimes by the potential risk associated with each crime. While some of the crimes are what you might expect to see in a movie — such as autonomous drone attacks or using driverless cars as weapons — it turns out that 4 of the 6 crimes of highest concern are less glamorous, focused instead on exploiting human vulnerabilities and biases.

Here are the top 4 human-factored AI threats:

Deepfakes

The ability of AI to fabricate visual and audio evidence, commonly called deepfakes, is the most concerning threat overall. The study warns that the use of deepfakes will “exploit people’s implicit trust in these media.” The concern is not only related to the use of AI to impersonate public figures, but also the ability to use deepfakes to trick individuals into transferring funds or handing over access to secure systems or sensitive information.

Scalable Spear-Phishing

Other high-risk, human-factored AI threats include scalable spear-phishing attacks. At the moment, crafting phishing emails that target specific individuals requires time and energy to learn the victim’s interests and habits. However, AI can expedite this process by rapidly pulling information from social media or impersonating trusted third parties. AI can therefore make spear-phishing more likely to succeed and far easier to deploy on a mass scale.

Mass Blackmail

Similarly, the study warns that AI can be used to harvest information about individuals en masse, identify those most vulnerable to blackmail, and then send tailor-crafted threats to each victim. These large-scale blackmail schemes can also use deepfake technology to create fake evidence against those being blackmailed.

Disinformation

Lastly, the study highlights the risk of using AI to author highly convincing disinformation and fake news. Experts warn that AI will be able to learn what type of content will have the highest impact, and generate different versions of one article to be published by a variety of (fake) sources. This tactic can help disinformation spread even faster and make it seem more believable. Disinformation has already been used to manipulate political events such as elections, and experts fear the scale and believability of AI-generated fake news will only increase the impact disinformation will have in the future.

The results of the study underscore the need to develop systems to identify AI-generated images and communications. However, that might not be enough. According to the study, when it comes to spotting deepfakes, “[c]hanges in citizen behaviour might [ ] be the only effective defence.” With the majority of the highest-risk crimes being human-factored threats, focusing on our own ability to understand ourselves and developing behaviors that give us the space to reflect before we react may therefore become the most important tool we have against these threats.

Auto-Forward Hell

We understand the risks of having our email credentials compromised. If it happens, we know to change our login information as quickly as possible to ensure whoever got in can’t continue to access our emails. The problem, however, is that there is a very simple way for hackers to continue to access the content of your inbox even after you change your password: auto-forwarding. If someone gains access to your email, they can quickly change your configurations to have every single email sent to your inbox forwarded to the hacker’s personal account as well.

The most immediate concern with unauthorized auto-forwarding is that a hacker can view and steal any sensitive or proprietary information sent to your inbox. However, the risks associated with this form of attack have far greater ramifications. By setting up auto-forwarding, phishers can carry out reconnaissance that enables more sophisticated social engineering scams in the long term.

For example, auto-forwarding can help hackers carry out spear phishing attacks — a form of phishing where the scammer tailors phishing emails to target specific individuals. By learning how the target communicates with others and what type of email they are most likely to respond to, hackers can create far more convincing phish and increase the chance that their attack will be a success.

Bad actors can also utilize auto-forwarding to craft highly sophisticated business email compromise (BEC) attacks. BEC is a form of social engineering in which a scammer impersonates vendors or bosses in order to trick employees into transferring funds to the wrong place. If the scammer is using auto-forwarding, they may be able to see specific details about projects or services being carried out and gain a better sense of the formatting, tone, and style of invoices or transfer requests. This can then be used to create fake invoices for actual services that require payment.

How to protect yourself from unauthorized auto-forwarding

There are, however, a number of steps you and your organization can take to prevent hackers from setting up auto-forwarding. The most obvious is to prevent access to your email account in the first place. Multi-factor authentication, for example, places an extra line of defense between the hacker and your inbox. Every organization should also disable or limit users’ ability to set up auto-forwarding. Some email providers allow organizations to block auto-forwarding by default; your IT or security team can then manually enable it for specific employees when requested for legitimate reasons, and for a defined time period.
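Beyond blocking auto-forwarding outright, administrators can periodically audit existing mailbox rules for forwards that point outside the organization. The sketch below illustrates the idea in Python; the rule format, field names, and domain list are all hypothetical placeholders, since every email provider exposes forwarding rules through its own admin API or export format.

```python
# Illustrative sketch: flag mailbox rules that forward mail to external domains.
# The rule dictionaries and the "forward_to"/"enabled" fields are hypothetical --
# adapt them to whatever your provider's admin API actually returns.

TRUSTED_DOMAINS = {"example-corp.com"}  # placeholder for your org's domains

def suspicious_forwards(rules):
    """Return enabled rules that forward mail outside the trusted domains."""
    flagged = []
    for rule in rules:
        target = rule.get("forward_to", "")
        domain = target.rsplit("@", 1)[-1].lower() if "@" in target else ""
        if rule.get("enabled") and domain and domain not in TRUSTED_DOMAINS:
            flagged.append(rule)
    return flagged

rules = [
    {"name": "backup", "enabled": True, "forward_to": "archive@example-corp.com"},
    {"name": "sync", "enabled": True, "forward_to": "dropbox.x99@freemail.net"},
]
print([r["name"] for r in suspicious_forwards(rules)])  # → ['sync']
```

Run on a real rule export, anything the audit flags is worth a quick conversation with the mailbox owner before assuming the worst.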

When it comes to the risks of auto-forwarding, the point is that the more hackers can learn about your organization and your employees, the more convincing their future phishing and BEC attacks will be. By putting safeguards in place that prevent access to email accounts and block auto-forwarding, you can lower the risk that a bad actor will gain information about your organization and carry out sophisticated social engineering attacks.

Cybersecurity Behaviors Made Easy

We’ve talked about the human factors of cybersecurity and the importance of exposing employees to social engineering scams and other attacks that exploit human vulnerabilities. However, when we look at how to improve our organization’s digital practices, we have to do more than train and simulate phish. We have to look at what we are asking staff to do and make sure the cybersecurity behaviors we want from them are easy, not difficult. Otherwise, those behaviors will be hard to sustain in the long term.

When you’re trying to create new behaviors — for yourself or for your employees — it’s essential to remember that motivation is not a constant. You might be energized and excited to spot phishing emails when you first learn how, but over time that enthusiasm can fade. You might get stressed about other parts of your job, or distracted by friends and family, and your interest in your new habit may start to slip. But that is okay! According to behavior scientist BJ Fogg, instead of trying to keep yourself motivated, focus on creating behaviors that are so easy you can do them without worrying about motivation at all.

So, when it comes to fostering cybersecurity behaviors in your employees, it’s essential to keep things short and easy to do. And the truth is, there are a number of super easy cybersecurity behaviors that will help keep you, your employees, and your business from being vulnerable to cyber threats. Here are just a few:

Automated Security Scanning

One example of a simple behavior for your software security is to run applications through an automated security scanning tool. Automation is becoming more and more helpful for taking some of the burden off your IT and security staff. Many scanning tools can now be set to run automatically and will highlight potential vulnerabilities in your applications, systems, and even websites. This leaves your security team free to evaluate and patch vulnerabilities instead of wading through your entire system looking for holes.

Single Sign-On

Another important and easy cybersecurity tool is single sign-on (SSO). Essentially, SSO allows employees to use one set of credentials to access a variety of separate services and applications. While it may seem safer to have different credentials for every application, single sign-on can actually create stronger authentication processes across the enterprise. As companies came to rely on more and more services, each requiring different credentials, it became hard for employees to keep track of all their login information, leading to worse password hygiene. By consolidating all credentials into one, SSO makes it easier for employees to use smart and secure credentials.

Phish Reporting

Another easy cybersecurity behavior you can implement is a phish reporting button within your email client. It’s essential that your IT department is aware of any phishing emails circulating around the office, and in many cases it’s up to employees to report any phish they receive. While simply forwarding an email to your IT help desk might not seem like a lot of work, a dedicated button for reporting potential phish is that much easier. Implementing a feature as simple as a report button can increase reporting and help your IT department keep the network safe.

There are plenty of additional cybersecurity behaviors that you can make easy. All you have to do is first look at what people do, find out what is making the behaviors you want to see difficult to accomplish, then work to make them easier.

Vishing Scam Led to Twitter Hack

By now, most people have heard about the hack of high-profile Twitter accounts that took place on July 15th. To carry out the attack, the perpetrators used a social engineering tactic called “vishing” — short for voice phishing — in which attackers use phone calls rather than emails or messages to trick individuals into giving out sensitive information. The incident once again highlights the risks associated with human rather than technical vulnerabilities, and shows Twitter’s shortcomings in managing employee access controls.

On the day of the attack, big names like President Barack Obama, Elon Musk, Jeff Bezos, and Joe Biden all tweeted a message asking users to send them bitcoin with the promise of being sent back double the amount. Of course, this turned out to be a scam, and the tweets were quickly removed — but not before the hackers received over $100,000 worth of bitcoin.

According to a statement by Twitter, the attackers gained access to the company’s internal systems the same day as the attack. Using “a phone spear phishing attack” — commonly known as vishing — the scammers tricked lower-level employees into revealing credentials that gave them access to Twitter’s internal systems. This access did not immediately let the attackers into user accounts. However, once inside, they were able to carry out additional attacks on other employees who did have access to Twitter’s support controls. From there, the hackers had access to every account on Twitter and could make important changes, including changing the email address associated with an account.

While vishing is not the best-known or most frequent form of social engineering attack, the Twitter hack shows just how dangerous it can be. It’s one type of attack that requires no code, email, or USB device to carry out. However, there are key protections businesses can use — protections that should have been in place at Twitter. First among them is to have explicit policies and safeguards for disclosing credentials and wiring funds. Individual employees should not be allowed to give out information on their own, even if they think they are giving it to a trusted colleague. Instead, employees should have to communicate with a third party within the company who can verify an employee’s identity before sharing credentials.

Secondly, Twitter needed stricter access controls throughout all levels of the company. While Twitter claims that “access to [internal] tools is strictly limited and is only granted for valid business reasons,” this was clearly not the case on July 15th. And even though the employees who were initially exploited did not have full access to user accounts, the hackers were able to leverage that limited access to gain even more advanced and detailed permission rights. Businesses should therefore ensure all employees, even those with limited access, have proper cyber awareness training and undergo simulations of various social engineering attacks.

Lastly, when it comes to vishing, it’s important to use techniques similar to those used to spot other types of scams. When you get a call, the first thing to do is simply take a breath. This interrupts automatic thinking and allows you to be more alert. You also need to make sure you are actually talking to who you think you are. Scammers can make a call look like it’s coming from a trusted number, so even if you get a call from someone in your contacts, it could still be a scammer. That’s why it’s important to focus on what the phone call or voicemail is trying to convey. Is it too urgent? Are they probing for sensitive or personal information about you or others? Is it relevant to what you already know? If anything at all seems off, be extra cautious before sharing anything that could be damaging.

While you may feel comfortable spotting a phishing attack, hackers and scammers are constantly looking for new ways to trick us. And, as the Twitter hack shows, they are very good at what they do. It’s better to be too cautious and assume you are at risk of being scammed than to think it could never happen to you. Because it can.

An Incident Response Plan Will Save You Money

Last week, we reviewed some of the highlights of IBM’s 2020 Cost of a Data Breach Report and saw how both human-factored cyber attacks and compromised credentials are increasingly frequent and can cost businesses upwards of $4 million. But now we’ll finish on a positive note by emphasizing a key driver in reducing the cost of a data breach: incident response.

If your business ever experiences a data breach, you don’t want to be caught without a plan. Being able to identify and stop the attack quickly will not only prevent more information from being stolen, but will also dramatically reduce the cost of the breach. Last year, the IBM report showed that businesses that had an incident response team and tested their response saved an average of $1.23 million on the cost of a breach. This year, that number jumped all the way to $2 million saved on a breach. Given the increased cost reduction of responding quickly, there is no reason why businesses shouldn’t have an incident response team in place.

However, having an incident response team in place is only one piece of the puzzle. It’s also important that your team, alongside business leadership, tests your plan by simulating the different cyber attacks your business is vulnerable to. According to the report, incident response testing is the single biggest factor in limiting the cost of a breach. Just testing your response shaves an average of nearly $300,000 off the cost of a breach.

When it comes to forming a response team and testing, it’s essential that your team includes more than just staff from the IT department. A data breach also requires the oversight of business leadership and legal counsel to ensure the response is aligned with regulatory requirements such as disclosure. Of course, having technical experts is also important to help limit further access, exfiltration, and damage to your systems.

Of course, as with everything today, COVID-19 has made your response team’s job even harder. While the report doesn’t have data on the exact impact COVID has had on response time, it does show that 76% of businesses expect remote work to increase the time it takes to respond to a breach — which, of course, will also increase the cost of the breach. It’s therefore essential that your team tests how your response differs when everyone is working remotely, and then discusses possible changes to the response plan should a breach happen while everyone is working from home.