Note: This feature was first published on 24 June 2024.
Are we ready, at home and at work, to deal with the threats stemming from generative AI? Image source: Pexels.
We’ve spoken about the rise of Machine Learning (ML) and generative AI, and the opportunities that they bring for users. But we are now becoming aware of the threats that they can bring as well.
In a recent study conducted by Trend Micro, nearly 72% of respondents expressed excitement about AI’s potential to enhance task quality and uncover new opportunities, but almost 65% said they worry about AI’s role in spreading misinformation while 58% are concerned about AI’s misuse of their images and likeness.
This is reflected in some of the possible ways that AI can be used in a cyberattack:
- Generative AI can be used to make more convincing emails for phishing attacks or messages that closely resemble a legitimate source.
- Cyberattackers can create highly convincing and personalised messages that can bypass security filters and gain access to sensitive information with AI created social engineering attacks.
- Deepfake technology can be used to generate realistic video or audio content that impersonates a person for blackmail or a trusted individual for use in a spear-phishing attack.
- The creation of multiple fake social media accounts to launch a smear campaign against an individual or business.
Coming to grips with the threat of AI
Eric Shulze, Vice President, Product Management at Trend Micro. Image source: Trend Micro.
With these threats in mind, we spoke to Eric Shulze, Vice President, Product Management at security company Trend Micro, about how the company plans to leverage AI to protect users and businesses from AI abuses and threats.
Generative AI is leading to the rise of more security threats. Image source: Pexels.
What are some threats posed by AI? How has generative AI changed cyberattacks and security
The threat landscape associated with AI is remarkably diverse and widespread. Based on Trend Micro’s research, a few of the major impacts brought to cybersecurity by generative AI involve:
- increasing criminals’ productivity,
- more sophisticated social engineering, and
- expansion to new markets.
Generative AI, much like how it has massively overhauled the productivity of the public, can also overhaul the productivity of the criminal world. For example, AI can provide easier access to information required to perform a cyberattack, improve the training of new wannabe hackers, and generate better and quicker social engineering scripts. It is not just about brand-new threats because of AI, but also how AI is being used to execute, accelerate, or increase effectiveness of existing activities.
On the topic of social engineering, generative AI is opening new opportunities to evolve the usual social engineering practices. As early as 2019, we have seen here at Trend, examples of scams leveraging audio deepfake to effectively trick someone to believe their CEO requested a wire transfer, as well as virtual kidnapping scams, whereby someone receives a phone call saying a family member is being held captive and a ransom needs to be paid. And now there is a growing use of deepfake video and images. But it just does not stop there, we must consider the potential of Large Language Model (LLM) to offer criminals the ability to code fake websites in less time, maliciously manipulate search engine results, or spread misinformation at a larger scale.
Last but not least, a point often overlooked, but in its simplicity will probably be the most impactful, is translations. Thanks to LLM, criminals now have the means to tap into “markets” that were previously inaccessible due to the language barrier. Think of your usual Nigerian prince scammer, and how easy it was to spot due to its messages written in broken English. Now, Nigerian prince scammers can write at scale messages in perfect English impossible for you to detect. With AI, Criminals from non-English-speaking countries can efficiently attack victims from the western world, and vice versa. Additionally, criminals can communicate and collaborate with each other more efficiently using generative AI.
When it comes to security, we are looking at AI from a couple of different perspectives. First, how we can use the power of AI to deliver new and/or enhance our cybersecurity solutions and capabilities. Secondly, how we can protect our customers against AI threats – whether that is safeguarding our customers against scams and attacks that are generated by, or executed with, AI, as discussed above. Or, protecting our customers as they use AI applications, which can be targets for tampering and abuse and could put them at risk as a result.
Additionally, in the case of AI PC, which will be powered heavily by generative AI, they are different from the PCs we have today because they are able to run AI applications locally on the devices versus in the cloud. Today, when people use AI applications, they are using them in the cloud because until now, PCs were not powerful enough to run AI applications in and of themselves. This is made possible by the chip called the MPU (neural processing unit.)
AI applications are vulnerable in ways that other applications are not. They can be at risk of harms such as prompt injections (where someone can cause the application to “misbehave” simply by the instructions they give it”) or model tampering (where a criminal gets access to the AI application itself and changes its behaviour from it was originally designed to do). If an AI application has been tampered with in some way, it can be directed by a malicious actor to do things such as steal sensitive information you may be storing on your PC.
The risks that come with using AI applications on your local device are very different from the traditional “malware” risks such as viruses or phishing or ransomware that traditional antivirus solutions protect. If you choose to buy an AI PC and only use traditional antivirus protection on it, you are not 100% protected as traditional A/V is not designed to protect you from the risks of AI applications running locally on your device.
How do we make sense of all the IT noise we come into contact with daily? Image source: Pexels.
How can AI help boost PC security? Is it really AI or just ML?
AI and machine learning (ML) in this case have two distinct roles in helping boost cybersecurity. ML has been used for years as a technology able to improve on every activity of filtering, detection, and classification. It is foundational to detection engines and drives the efficacy of cybersecurity solutions.
AI, specifically generative AI, is having a huge role in increasing productivity and helping operators deal more efficiently with all forms of unstructured information, such as threat reports, vulnerability reports, incident response reports, malware disclosure reports, machine logs, and notifications. Generative AI can and will help sifting through vast amounts of information and transforming it to something more digestible and more actionable to the user.
Whereas generative AI is having a huge role in increasing the level of insight and productivity for the users of cybersecurity solutions – whether on a PC or otherwise. Generative AI helps sift through huge amount of information and transforms it to something more digestible and more actionable to the user. From a consumer perspective, that can be applied in many ways - from receiving an easy-to-understand summary of an app or website’s data collection and privacy policy before deciding to accept it, to being alerted to a scam with an explanation and recommendation of what to do, to being able to ask questions and get support through an AI chatbot.
Is your security smart enough to deal with false positives? Image source: Pexels.
How do we identify and deal with false positives? What about any possible bias?
We can manage false positives by choosing the right model for the right task and encouraging transparency and explanation of AI and ML models. In the AI domain, a distinction can be drawn between solutions that act as black boxes (i.e. neural networks in their usual implementation) and solutions that offer explainable results. Explain ability is not always necessary, but it is critical in cybersecurity cases. For instance, if we classify something as malicious, identify scams, or decide to block traffic, we need to know why that decision was taken. We have to be able to improve and provide customers an explanation on why a potential course of action was taken.
Prioritising explainable models, together with curating data literacy (really understanding the data), will help identify biases, and enable tuning against false positives.
Is 2FA still enough? Image source: Pixabay.
How is AI changing our approach to 2FA and captcha?
While Two-Factor Authentication (2FA) seems to remain untouched by threats linked to AI for now, we have seen generative image-to-text models like GPT-4 vision or GPT-4o being used to actively break captcha.
In this regard, we believe we will see increasingly of a cat-and-mouse game between newer, more sophisticated captcha services and newer more capable AI models that will be capable of breaking them, leading to a general need of overhauling the captcha system altogether.
Can you deal with the rise of deep fakes?
Should our concerns over privacy change in the face of generative AI and AI-based cyberthreats? Do compliance concerns change?
At a macro-level, the main concern over data privacy and compliance stems from the fact that legislation is still underway around the world, while the need for, and use of, data is continually growing due to the necessity of training new AI models.
The European Union for example, has an AI act that will be fully implemented by 2025, while other countries might be lagging. This can create a situation of temporary confusion, especially for companies aiming at using AI on the global market.
At an individual level, privacy concerns with using an AI application can vary depending on the specific application and how it handles user data. Users need to be aware of what data – sensitive, personal data in particular – is being collected, stored, and used, so they can make informed decisions as to what they use and what they consent to. As the prominence of AI applications grow, so does the potential for exploitation.
The introduction of AI PCs is providing new data privacy benefits for consumers. AI PC refers to PCs equipped with high-performing processors, neural processing units (NPUs) in particular, that enable applications to leverage AI capabilities for various tasks directly on the PC. Because data does not have to be sent to the cloud for processing, that means that analysis and tasks can be done very efficiently locally, and very importantly it means added data privacy to users as they do not have to permit their personal, sensitive data going to the cloud.
Does AI security have solutions in common? i.e. does it all look like sandboxes or honeypots?
While there is a broad range of applications of AI within security solutions, the biggest common denominator is certainly data – optimising what is collected, how it is organised, how it is analysed, how it is used and more.
Regardless of what AI or ML model is chosen, data quality is key for its successful performance. Therefore, data curation, data literacy, and data policies are all key elements common to every AI solution in cybersecurity.
Does using AI for consumer security increase CPU usage and battery life?
Yes, but this is also the reason companies like Intel, AMD, Google, and Qualcomm are all working on CPU – and now NPU – architectures that are optimised for AI-related computations, namely fast and efficient factor multiplications, which can also help some with battery life.
Should we be investing in cyber-insurance? Image source: Pexels.
How can a person best determine his new risk appetite and tolerance in the face of AI-based cyberthreats?
To determine risk tolerance, a person needs to understand the type and level of risk their activities represent in the first place. This is why we are focused on proactive insight, explanation, and alerting so the consumer becomes more educated where threats exist and what measures they can take to reduce their exposure. And of course, cybersecurity solutions need to stay ahead of the evolving AI threat landscape to protect the customer regardless of where their risk tolerance may lie.
What about a corporate environment? How do things change for corporate IT? What are the five steps an IT manager should do immediately?
I focus on our consumer business, but having come from the enterprise side working with many corporate IT departments, this would be my recommendation:
- Define or understand the AI strategy for the company. AI comes with lots of benefits and dangers as we evolve with it. It is critical to evaluate who, and under what circumstances can employees enable AI technology.
- Raise awareness in the company through required training sessions and educational resources. Based on a recent Trend Micro survey, while many people have heard of AI, they are not as familiar to the capabilities (and dangers) that come with generative AI.
- Hire or elect a Chief AI Officer (CAIO) to oversee all AI-related applications and implement coherent policies around AI in the company. This person should work very closely with the Chief Information Security Officer (CISO) and Chief Information Officer (CIO) because their area of expertise go hand in hand in data sovereignty.
- Develop an AI contingency plan and implement zero trust processes in the company to protect critical assets. Requests/orders within the company should always be double checked and validated before further procedure – no matter if the request came directly from the executives.
- Diligent monitoring of AI-related resources and AI reputations, both internal and external.
Our articles may contain affiliate links. If you buy through these links, we may earn a small commission.