How ChatGPT improves the productivity… of cybercriminals


The democratization of artificial intelligence tools and self-service phishing kits has contributed heavily to the growth of phishing. Cybercriminals can now launch phishing campaigns faster than ever. More worrying still, ChatGPT lets them create malicious content ever more effectively.

The best soups are made in old pots, as the saying goes. The same holds for hacking. Year after year, phishing remains one of the most widespread techniques for stealing private or sensitive information.

According to cybersecurity company Zscaler, a mid-sized business receives dozens of phishing emails every day. And many employees fall into cybercriminals' traps: one in three users clicks on the malicious content of phishing emails, according to SoSafe, the European leader in employee awareness and training.

But the situation could get worse with the massive use of ChatGPT to craft malicious emails. In a recent study, IBM X-Force (an IBM team that brings together more than 200 hackers around the world) put OpenAI's ChatGPT to the test.

From five simple prompts, the chatbot produced a convincing email in just a few minutes (compared to 16 hours for IBM's team of experts), one that proved almost as enticing as a message written by a human. Asked for the best way to ensnare the 800 employees of a global healthcare company, ChatGPT cited trust, authority and social proof, along with personalization, optimization and a call to action. The AI therefore recommended spoofing the email address and impersonating the human resources manager.

Time savings for hackers!

To make their malicious email more credible, the IBM team gathered key information from sources such as LinkedIn, the targeted company's blog and Glassdoor reviews. On the blog, the team notably found an article announcing the recent launch of a well-being program for employees and HR.

The trap was ready. It took the form of an email containing an employee survey with "five brief questions" that would take only "a few minutes" and had to be returned by "this Friday."

Finally, the subject line of the message written by the human team was concise ("Employee well-being survey"), while the AI's was longer ("Unlock Your Future: Limited Advancements at Company X"), which probably aroused suspicion from the start.

All of these are psychological levers meant to push employees into responding to the email quickly. In the end, the phishing email created by the human team proved more effective, but only just: its click-through rate was 14%, compared with 11% for the AI-generated message.

"As AI evolves, we will continue to see it mimic human behavior more accurately, which could lead to AI one day eventually beating humans," warned IBM.

Above all, this study showed that generative AI accelerates hackers' ability to create convincing phishing emails. The time saved lets them pursue other malicious objectives or refine their preliminary reconnaissance to be even more effective.
