
AI bots are helping scammers craft messages without broken English

By Team HardwareZone - on 13 Mar 2023, 8:10pm



Note: This article was written by Osmond Chia and first appeared in The Straits Times on 12 March 2023. We've republished a truncated version of the story. 

Bad grammar has long been a telltale sign that a message or job offer is likely to be a scam. But cyber-security experts say those days may be over, as generative artificial intelligence (AI) chatbots like ChatGPT have helped scammers craft messages in near-perfect language. Experts say they have observed improvements in the language used in phishing scams in recent months – coinciding with the rise of ChatGPT – and warn that end users will need to be even more vigilant for other signs of a scam.

Risks associated with ChatGPT were categorised as an emerging threat in a special-issue report in March by security software firm Norton, which said scammers will tap large language models like ChatGPT. While not new, the tools are more accessible than before, and can be used to create deepfake content, launch phishing campaigns and code malware, wrote Norton.

British cyber-security firm Darktrace reported in March that since ChatGPT’s release, e-mail attacks have contained messages with greater linguistic complexity, suggesting cyber criminals may be directing their focus to crafting more sophisticated social engineering scams.

ChatGPT is able to correct imperfect English and rewrite blocks of text for specific audiences – in a corporate tone, or for children. It now powers the revamped Microsoft Bing (with versions for mobile), which crossed 100 million active users in March and is set to challenge Google in a fight for the search-engine pie.

Mr Matt Oostveen, regional vice-president and chief technology officer of data management firm Pure Storage, has noticed the text used in phishing scams becoming better written in the past six months, as cases of cyber attacks handled by his firm rose. He is unable to quantify the number of cases believed to be aided by AI as it is still early days.

He said: "ChatGPT has a polite, bedside manner to the way it writes... It was immediately apparent that there was a change in the language used in phishing scams." He added: "It’s still rather recent, but in the last six months, we've seen more sophisticated attempts start to surface, and it is probable that fraudsters are using these tools."

The polite and calm tone of chatbots like ChatGPT comes across as similar to how corporations might craft their messages, said Mr Oostveen. This could trick people who previously caught on to scams that featured poor and often aggressive language, he added.

In December, Check Point Software Technologies found evidence of cyber criminals on the Dark Web using the chatbot to create a Python script that could be used in a malware attack.

Seven in 10 Singapore organisations reported an attempted e-mail attack in 2022, based on respondents surveyed by Proofpoint. Just over half of local firms with a security awareness programme train their entire workforce on scams, and even fewer conduct phishing simulations to prepare staff for potential attacks, said Proofpoint. Familiar logos and branding were enough to convince four in 10 respondents that an e-mail was safe.

But there are still ways people can guard against AI-generated scams – by looking out for suspicious attachments, headers, senders and URLs embedded within an e-mail, it added in a separate report. Experts urge the public to verify whether a sender’s e-mail is authentic and encourage firms to conduct ransomware drills and work with consultants to plug gaps in their systems.
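The advice above – checking senders and embedded URLs rather than relying on bad grammar – can be partly automated. Below is a minimal, hypothetical sketch of two such heuristics in Python: flagging links whose host is a raw IP address, and flagging lookalike hosts that embed a trusted brand name without actually belonging to that domain. The `TRUSTED_DOMAINS` allow-list is an assumption for illustration; real mail filters combine many more signals.

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list for illustration only.
TRUSTED_DOMAINS = {"example.com"}

def suspicious_url(url: str) -> bool:
    """Very rough heuristic check on a link found in an e-mail."""
    host = urlparse(url).hostname or ""
    # Heuristic 1: a raw IP address instead of a domain name.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return True
    # Heuristic 2: a trusted brand name embedded in a longer,
    # unrelated host (e.g. "example.com.evil.io").
    for trusted in TRUSTED_DOMAINS:
        brand = trusted.split(".")[0]
        if brand in host and host != trusted and not host.endswith("." + trusted):
            return True
    return False

print(suspicious_url("http://192.168.0.1/login"))           # True
print(suspicious_url("https://example.com.evil.io/reset"))  # True
print(suspicious_url("https://example.com/account"))        # False
```

Heuristics like these only narrow the field; as the experts quoted here note, verifying the sender and the organisation through a separate channel remains the safer habit.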

