Microsoft warns that an OpenAI API is being abused as a backdoor for espionage

On Monday, Microsoft Detection and Response Team (DART) researchers warned that an OpenAI API was being abused as a covert communications channel for backdoor malware. The researchers concluded that bad actors were using the novel backdoor to conduct long-term espionage operations.
Specifically, Microsoft’s cybersecurity researchers discovered that cybercriminals were exploiting the OpenAI Assistants API as a clever way to hide their illicit activities, according to Bleeping Computer.
“Instead of relying on more traditional methods, the threat actor behind this backdoor abuses OpenAI as a [command-and-control] channel as a way to stealthily communicate and orchestrate malicious activities within the compromised environment. To do this, a component of the backdoor uses the OpenAI Assistants API as a storage or relay mechanism to fetch commands, which the malware then runs,” the researchers wrote in a Microsoft Incident Response report published on Nov. 3.
Read on to find out how the exploit worked and how to guard against it.
How bad actors exploited the OpenAI Assistants API
In July, the researchers say they discovered a new backdoor within OpenAI’s Assistants API while investigating a “sophisticated security incident.” They named the backdoor SesameOp. (Cybersecurity researchers often give catchy names to new strains of malware or cybersecurity exploits.)
The Assistants API is a developer tool that lets OpenAI’s enterprise clients build AI assistants into their own apps. Essentially, it brings OpenAI tools like its GPT models and Code Interpreter into third-party apps. We should also note that this system is set to be replaced by OpenAI’s Responses API.
The DART researchers found that the covert backdoor enabled threat actors to manage compromised devices undetected, using the Assistants API to piggyback malicious commands and encrypted data on otherwise ordinary-looking API traffic. While the incident response report is short on specifics, the backdoor allowed the bad actors to harvest data for “espionage-type purposes.” By routing communications through the OpenAI API, the cybercriminals were able to disguise their activities as legitimate traffic.
“This threat does not represent a vulnerability or misconfiguration, but rather a way to misuse built-in capabilities of the OpenAI Assistants API,” the researchers concluded.
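To see why this kind of misuse is hard to spot, consider a simplified, purely local simulation of the relay idea the researchers describe. The sketch below makes no real OpenAI API calls and runs no commands; the function names, the message fields, and the harmless placeholder payload are all our own illustration, not SesameOp's actual code. The point is only that an opaque payload tucked into a normal-looking API message is indistinguishable, to a network monitor, from routine AI traffic.

```python
import base64
import json

# Illustrative sketch only: this simulates, locally, how an innocuous-looking
# API message object could double as a storage/relay channel. No real OpenAI
# calls are made, and the "command" below is a harmless placeholder string.

def wrap_payload(command: str) -> dict:
    """Hide a payload inside a field of an otherwise ordinary message object."""
    encoded = base64.b64encode(command.encode()).decode()
    # To a network monitor, this JSON looks like routine assistant traffic:
    # a role, some content, and a metadata blob.
    return {"role": "user", "content": "status check", "metadata": {"blob": encoded}}

def unwrap_payload(message: dict) -> str:
    """What a backdoor component would do on the other end: fetch and decode."""
    return base64.b64decode(message["metadata"]["blob"]).decode()

msg = wrap_payload("report-host-info")  # sender side (simulated)
print(json.dumps(msg))                  # looks like ordinary API JSON
print(unwrap_payload(msg))              # prints: report-host-info
```

Because the traffic terminates at a widely trusted service rather than at attacker-controlled infrastructure, simple domain blocklists won't catch it, which is why Microsoft's recommendations focus on auditing which machines are talking to such services at all.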
How to guard against the SesameOp backdoor
Along with an in-depth technical analysis of the threat, Microsoft researchers provided a list of recommendations to mitigate the impact of the exploit.
You can read the full list of recommendations in the Microsoft Incident Response report. Some suggestions include “Audit and review firewalls and web server logs frequently” and “Review and configure your perimeter firewall and proxy settings to limit unauthorized access to services, including connections through non-standard ports.”
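As a concrete starting point for that kind of log review, here is a minimal sketch of an audit script. The log format, file contents, hostnames, and allowlist below are hypothetical assumptions for illustration; adapt them to whatever your firewall or proxy actually emits. The idea is simply to flag machines reaching `api.openai.com` that have no business doing so.

```python
# Minimal sketch, assuming a hypothetical space-separated proxy log of the
# form "<timestamp> <source-host> <destination>". All names here are made up.

ALLOWED_HOSTS = {"dev-workstation-01"}  # hosts expected to call OpenAI APIs

def flag_unexpected_openai_traffic(log_lines, allowed=ALLOWED_HOSTS):
    """Return log entries where a non-allowlisted host reached api.openai.com."""
    flagged = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        _timestamp, source, destination = parts[:3]
        if "api.openai.com" in destination and source not in allowed:
            flagged.append(line)
    return flagged

sample = [
    "2025-11-03T10:00:01 dev-workstation-01 api.openai.com:443",
    "2025-11-03T10:02:17 file-server-02 api.openai.com:443",
]
for entry in flag_unexpected_openai_traffic(sample):
    print(entry)  # only the file-server entry is flagged
```

A file server contacting an AI API is exactly the kind of anomaly Microsoft's guidance is meant to surface; in practice you would feed this from your real proxy logs or a SIEM query rather than a hand-built list.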
Because the OpenAI Assistants API is set to be deprecated next year anyway, developers may also want to go ahead and migrate to the Responses API that replaces it. OpenAI has a migration guide on its website.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.