Email security firms see over 10X surge in email phishing attacks amid ChatGPT’s emergence

Stocklytics.com analysis reveals a staggering, more than tenfold increase in email phishing attacks following the introduction of ChatGPT, with some companies experiencing surges of up to 1,265%. Beyond building AI tools like WormGPT, Dark Bart, and FraudGPT, which generate malware proliferating on the dark web, cybercriminals are actively seeking avenues to exploit OpenAI’s flagship AI chatbot.

Stocklytics financial analyst Edith Reads shared insights on the data, saying, “Threat actors are using tools like ChatGPT to orchestrate schemes involving targeted email fraud and phishing attempts. These attacks often entice victims to click on deceitful links and disclose information such as usernames and passwords.”

During the last quarter of 2022, phishing attacks surged, with cybercriminals sending approximately 31,000 fraudulent emails daily, a 967% increase in credential phishing attempts.

Text-based business email compromise (BEC) accounted for 70% of phishing attacks, while SMS phishing (smishing) made up 39% of mobile-targeted attacks. Perpetrators used tools such as ChatGPT to craft deceptive messages aimed at tricking individuals into disclosing sensitive information.

In their phishing attempts, cybercriminals commonly employ deceptive emails, texts, or social media messages that appear authentic. These messages dupe victims into visiting websites where they unwittingly authorize transactions from their accounts, leading to financial losses.

ChatGPT drives email phishing attacks

In response to the growing threat of cybercriminals exploiting generative AI, Edith advocates for the proactive use of AI technologies by cybersecurity experts. Despite efforts by developers such as OpenAI, Anthropic, and Midjourney to safeguard against misuse, skilled individuals continue to find ways to bypass protective measures. Recent reports, including one from the RAND Corporation, highlight the potential misuse of generative AI chatbots by terrorists seeking to learn about biological attacks. Researchers have also demonstrated vulnerabilities, such as manipulating ChatGPT into providing criminal instructions through less commonly tested languages.

To address these issues, OpenAI has enlisted cybersecurity experts, known as red teams, to identify security weaknesses in its AI systems.
