North Korea-linked hackers use AI to forge South Korean military ID in phishing attack

A suspected North Korean hacking group has been found using ChatGPT to generate a forged South Korean military identification document as part of a phishing campaign, according to a Bloomberg report citing research by Genians, a South Korean cybersecurity company.

Rather than embedding a genuine image of the card, the attackers linked the fake ID to malware designed to extract sensitive information from victims' devices.

The incident highlights how North Korean operatives are increasingly deploying artificial intelligence tools to advance cyber-espionage, with targets ranging from journalists and human rights activists to researchers focused on North Korea.

Hackers deploy fake military ID in South Korea

The group involved in the latest attack has been identified as Kimsuky, a suspected North Korean state-sponsored espionage unit.

Researchers said the hackers crafted a draft version of a South Korean military identification card using ChatGPT, making their phishing email appear more credible.

The email was sent from an address ending in .mli.kr, a suffix closely resembling an official South Korean military domain, and was designed to trick recipients into opening the attachment.
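The spoofed .mli.kr suffix is the technical tell in this campaign. As a rough illustration only (not a method described by Genians), the sketch below flags sender domains that closely resemble, but do not exactly match, a trusted suffix; the trusted value .mil.kr, the similarity threshold, and the function names are assumptions chosen for this example:

```python
from difflib import SequenceMatcher

# Illustrative assumptions, not values taken from the Genians research.
TRUSTED_SUFFIX = "mil.kr"     # official South Korean military second-level domain
SIMILARITY_THRESHOLD = 0.8    # flag near-misses above this similarity ratio

def registrable_suffix(domain: str) -> str:
    """Return the last two labels of a domain, e.g. 'lee.mli.kr' -> 'mli.kr'."""
    return ".".join(domain.lower().rstrip(".").split(".")[-2:])

def is_suspicious(sender_domain: str) -> bool:
    """Flag domains that closely resemble, but do not match, the trusted suffix."""
    suffix = registrable_suffix(sender_domain)
    if suffix == TRUSTED_SUFFIX:
        return False  # exact match: treated as legitimate in this sketch
    ratio = SequenceMatcher(None, suffix, TRUSTED_SUFFIX).ratio()
    return ratio >= SIMILARITY_THRESHOLD

print(is_suspicious("lee.mli.kr"))    # spoofed suffix resembling mil.kr
print(is_suspicious("army.mil.kr"))   # exact trusted suffix
```

Real mail filters rely on far stronger signals (SPF/DKIM/DMARC, curated blocklists), but character-level similarity checks like this are one common heuristic against typosquatted domains.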

Once the attachment was opened, it deployed malware capable of extracting data from the device.

The targets included South Korean journalists, human rights activists, and researchers studying North Korea.

Exactly how many individuals were compromised remains unclear.

Kimsuky’s history of espionage and AI use

Kimsuky has previously been linked to spying efforts against South Korean and international targets.

In a 2020 advisory, the US Department of Homeland Security stated that the group “is most likely tasked by the North Korean regime with a global intelligence-gathering mission.”

The Genians report is the latest to show suspected North Korean hackers using artificial intelligence as part of their operations.

In August, Anthropic reported that North Korean hackers used Claude Code, another AI tool, to secure remote jobs at US Fortune 500 companies.

The AI chatbot helped operatives build convincing fake identities, pass technical assessments, and deliver coding tasks once hired.

Earlier this year, OpenAI said it had banned accounts linked to North Korea that were using its services to create fraudulent résumés, cover letters, and social media content as part of recruitment attempts.

Investigators test AI restrictions

Genians researchers confirmed that ChatGPT initially rejected attempts to generate a government-issued ID, as the reproduction of such documents is illegal in South Korea.

By altering their prompts, however, the hackers bypassed these restrictions and were able to create a fake draft image.

The use of AI in these cyberattacks shows how quickly generative models can be adapted for malicious purposes.

Researchers warn that attackers are using AI not just to create convincing images, but also for malware development, attack scenario planning, and impersonation of recruiters.

Cyberattacks tied to North Korean funding efforts

American officials have long alleged that North Korea employs cyberattacks, cryptocurrency theft, and disguised IT contracts to gather intelligence and generate revenue.

These operations, according to US government assessments, are designed to evade sanctions and finance Pyongyang’s nuclear weapons programme.

The phishing attempt against South Korean targets is another example of how AI is being integrated into such operations.

While the attack used a fake military ID as bait, the broader goal remained consistent with previous North Korean tactics: extracting data and extending cyber-espionage capabilities.

The post North Korea-linked hackers use AI to forge South Korean military ID in phishing attack appeared first on Invezz