Experts Worry Future AI Crimes Will Target Human Vulnerabilities

Earlier this month, a study by University College London identified the top 20 security issues and crimes likely to be carried out with the use of artificial intelligence in the near future. Experts then ranked the list of future AI crimes by the potential risk associated with each crime. While some of the crimes are what you might expect to see in a movie — such as autonomous drone attacks or using driverless cars as a weapon — it turns out 4 of the 6 crimes of highest concern are less glamorous, focused instead on exploiting human vulnerabilities and biases.

Here are the top 4 human-factored AI threats:

Deepfakes

The ability of AI to fabricate visual and audio evidence, commonly called deepfakes, is the most concerning threat overall. The study warns that deepfakes will “exploit people’s implicit trust in these media.” The concern is not only about using AI to impersonate public figures, but also about using deepfakes to trick individuals into transferring funds or handing over access to secure systems or sensitive information.

Scalable Spear-Phishing

Other high-risk, human-factored AI threats include scalable spear-phishing attacks. At the moment, phishing emails targeting specific individuals require time and energy to learn the victim’s interests and habits. However, AI can expedite this process by rapidly pulling information from social media or impersonating trusted third parties. AI can therefore make spear-phishing more likely to succeed and far easier to deploy on a mass scale.

Mass Blackmail

Similarly, the study warns that AI can be used to harvest massive amounts of information about individuals, identify those most vulnerable to blackmail, then send tailored threats to each victim. These large-scale blackmail schemes can also use deepfake technology to create fake evidence against those being blackmailed.

Disinformation

Lastly, the study highlights the risk of using AI to author highly convincing disinformation and fake news. Experts warn that AI will be able to learn what type of content will have the highest impact and generate different versions of one article to be published by a variety of (fake) sources. This tactic can help disinformation spread even faster and make it seem more believable. Disinformation has already been used to manipulate political events such as elections, and experts fear the scale and believability of AI-generated fake news will only increase the impact disinformation has in the future.

The results of the study underscore the need to develop systems that identify AI-generated images and communications. However, that might not be enough. According to the study, when it comes to spotting deepfakes, “[c]hanges in citizen behaviour might [ ] be the only effective defence.” With the majority of the highest-risk crimes being human-factored threats, focusing on our own ability to understand ourselves — and developing habits that give us space to reflect before we react — may therefore become the most important tool we have against these threats.