
Security news weekly round-up – 18th July 2025

No system is safe. The only question is: who has the patience and resolve to crack it?

Just when I think we have seen enough, I read about new vulnerabilities and malware. This tells me that a future where they no longer exist is far away (or impossible) because of our nature as humans. We make mistakes that lead to vulnerabilities, and among us are people with malicious intent who will spend the time and money to create malicious code: malware with varying capabilities.

The zero-day that could’ve compromised every Cursor and Windsurf user

Luckily, the good guys discovered the vulnerability. But how did it happen? Before we get to that, if you’re new to Cursor, you can read about it on UI Bakery; for Windsurf, head over to DataCamp.

Now, back to the question that I asked earlier. The following excerpt explains how it all started:

The vulnerability allowed any attacker, not only to gain control over a single extension, but a supply chain armageddon, gaining full control over the entire marketplace. Given this flaw, any attacker could push malicious updates under the trusted @open-vsx account.

Grok-4 Falls to a Jailbreak Two Days After Its Release

Let’s be specific. Researchers used two jailbreaking methods to pwn Grok-4: Echo Chamber and Crescendo. Both are multi-turn jailbreaks, but they differ in how they work. The article calls the combination a hybrid attack, and it presents a challenge for developers of secure LLMs.

On the success rate of the attack, we have the following from the article:

The researchers tested the combined Echo Chamber and Crescendo jailbreak method against other ‘forbidden’ outputs from Grok-4. It was successful on many occasions.

For Crescendo’s Molotov cocktails it achieved a 67% success rate. For the Crescendo ‘meth’ (methamphetamine synthesis) test, it achieved a 50% success rate.

For the Crescendo ‘toxin’ (toxic substances or chemical weapon synthesis) test, it achieved a 30% success rate.

OpenAI, Google DeepMind and Anthropic sound alarm: ‘We may be losing the ability to understand AI’

On a normal day, these tech companies are rivals, no doubt. But when they drop their rivalry for a while to sound a warning, we should listen.

Currently, and by that I mean July 2025, we can see the reasoning of AI models while they work on our assigned tasks. That is, you can see what the AI model is "thinking" before it generates an output. Now, researchers are saying that as these models advance, we could lose this transparency, which would prevent us from catching harmful LLM intentions before they become actions.

From the article, we have the following:

The breakthrough centers on recent advances in AI reasoning models like OpenAI’s o1 system. These models work through complex problems by generating internal chains of thought (CoT) — step-by-step reasoning that humans can read and understand.

The transparency could vanish through several pathways. As AI companies scale up training using reinforcement learning — where models get rewarded for correct outputs regardless of their methods — systems may drift away from human-readable reasoning toward more efficient but opaque internal languages.

Hackers exploit a blind spot by hiding malware inside DNS records

If you think that it’s impossible, think again. Now, we can ask: how did they pull this off? The excerpt below offers some answers.

The file was converted from binary format into hexadecimal. The hexadecimal representation was then broken up into hundreds of chunks. Each chunk was stashed inside the DNS record of a different subdomain of the domain whitetreecollective[.]com.

An attacker who managed to get a toehold into a protected network could then retrieve each chunk using an innocuous-looking series of DNS requests, reassemble them, and then convert them back into binary format.

The technique allows the malware to be retrieved through traffic that can be hard to closely monitor.
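To make the hex-chunking idea more concrete, here is a minimal Python sketch of the scheme described above. The chunk-per-subdomain naming and the TXT-style 255-byte limit are assumptions on my part; the article only says the chunks sat in DNS records of subdomains. A plain dictionary stands in for the DNS zone so the example stays benign and runs offline.

```python
# Minimal sketch, assuming one hex chunk per numbered subdomain.
# A dict simulates the DNS zone; no real DNS queries are made.

CHUNK_SIZE = 255  # DNS TXT strings are capped at 255 bytes per string


def stash(payload: bytes, domain: str) -> dict[str, str]:
    """Convert a binary payload to hex and spread it across subdomain 'records'."""
    hex_blob = payload.hex()
    chunks = [hex_blob[i:i + CHUNK_SIZE] for i in range(0, len(hex_blob), CHUNK_SIZE)]
    # e.g. 0.example.com, 1.example.com, ... each holding one hex chunk
    return {f"{i}.{domain}": chunk for i, chunk in enumerate(chunks)}


def retrieve(zone: dict[str, str], domain: str) -> bytes:
    """Fetch the chunks in order, join them, and convert the hex back to binary."""
    hex_blob = "".join(zone[f"{i}.{domain}"] for i in range(len(zone)))
    return bytes.fromhex(hex_blob)


original = b"harmless example payload"
zone = stash(original, "example.com")
assert retrieve(zone, "example.com") == original
```

The reassembly side is what makes this hard to spot: each lookup on its own looks like an ordinary DNS query, and DNS is usually allowed outbound.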

Google finds custom backdoor being installed on SonicWall network devices

The funny thing about this article is that it leaves one question unanswered: the research team does not know how the attackers established a reverse shell on the devices.

From the article:

The targeted devices are end of life, meaning they no longer receive regular updates for stability and security. Despite the status, many organizations continue to rely on them.

GTIG (Google Threat Intelligence Group) recommends that all organizations with SMA (SonicWall Secure Mobile Access) appliances perform analysis to determine if they have been compromised.

LameHug malware uses AI LLM to craft Windows data-theft commands in real-time

It has begun. Or wait, should I say: we are starting to see it in real time? I mean, would you have thought of malware using an LLM like this?

This further shows that, while LLMs aid human productivity, attackers are also using them for malicious purposes. What’s more, it might get worse given the fast pace of advancement in the AI space. Hopefully, future defenses can keep up.

From the article:

In the observed attacks, LameHug was tasked with executing system reconnaissance and data theft commands, generated dynamically via prompts to the LLM. These AI-generated commands were used by LameHug to collect system information and save it to a text file.
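For illustration only, here is roughly what the "prompt in, command out" pattern looks like in Python. The endpoint URL, JSON fields, and prompt below are hypothetical placeholders rather than anything from LameHug itself, and the generated command is printed instead of executed.

```python
# Sketch of dynamic command generation: the prompt is fixed, but the actual
# command comes back from an LLM at run time. The endpoint and response
# shape are hypothetical; the suggested command is never executed here.
import requests

PROMPT = (
    "Return a single Windows command that lists basic hardware and OS "
    "information as plain text, with no explanation."
)


def generate_command(endpoint: str) -> str:
    resp = requests.post(endpoint, json={"prompt": PROMPT}, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"].strip()  # assumed response field


if __name__ == "__main__":
    # .invalid is a reserved TLD, so this placeholder will never resolve.
    command = generate_command("https://llm.example.invalid/v1/generate")
    print("Model suggested:", command)  # a real implant would execute this
```

The point of the pattern is that static analysis of the binary reveals only the prompt, not the commands it will eventually run.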

Hackers Use GitHub Repositories to Host Amadey Malware and Data Stealers, Bypassing Filters

It’s weird when you read an article like this, but it’s real: attackers use a legitimate platform to distribute malware. This offers them some advantages. First, they can exploit the trust that users have in such platforms. Second, they can fly under the radar of tools that scrutinize network traffic.

Both advantages show up in the linked article above. The following is the main takeaway:

It’s believed that the GitHub accounts used to stage the payloads are part of a larger MaaS operation that abuses Microsoft’s code hosting platform for malicious purposes.

The findings also follow the discovery of a wide range of social engineering campaigns that are engineered to distribute various malware families.

Credits

Cover photo by Debby Hudson on Unsplash.

That’s it for this week, and I’ll see you next time.
