Microsoft has filed a lawsuit against a group it alleges intentionally built and used tools to bypass the safety guardrails of its cloud AI products.
According to a complaint the company filed in December in the U.S. District Court for the Eastern District of Virginia, a group of 10 unnamed defendants allegedly used stolen customer credentials and custom-written software to break into the Azure OpenAI Service, Microsoft's fully managed service powered by technology from ChatGPT maker OpenAI.
Microsoft accuses the defendants, referred to as "Does" in the complaint, of violating several laws by gaining unauthorized access to Microsoft's systems to produce offensive and illicit content; the complaint does not specify what that content was. The company is seeking an injunction and damages.
According to the complaint, Microsoft discovered in July 2024 that Azure OpenAI Service credentials were being misused: API keys stolen from paying customers were being used to generate content that violated the service's acceptable use policy. Microsoft suspects the defendants engaged in systematic API key theft, using keys taken from U.S.-based customers as part of the hacking scheme. The defendants allegedly built de3u, a tool that leveraged the stolen keys to generate images with DALL-E while circumventing restrictions on content prompts.
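To see why stolen API keys are sufficient for this kind of abuse, consider how a typical Azure OpenAI request is authenticated: the key is passed as an opaque header value, so anyone who holds it can call the service as the key's owner. The sketch below assembles (but does not send) such a request; the resource name, deployment, and API version are illustrative placeholders, not details taken from the complaint.

```python
# Hypothetical sketch of Azure OpenAI request authentication.
# Whoever possesses the "api-key" header value can bill requests to
# the key's owner -- the crux of the alleged key-theft scheme.

def build_request(resource: str, deployment: str, api_key: str, prompt: str) -> dict:
    """Assemble (but do not send) an image-generation request."""
    return {
        "url": (
            f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/images/generations?api-version=2024-02-01"
        ),
        "headers": {
            "api-key": api_key,  # the sole credential attached to the call
            "Content-Type": "application/json",
        },
        "body": {"prompt": prompt, "n": 1, "size": "1024x1024"},
    }

req = build_request("example-resource", "dall-e-3", "<api-key>", "a lighthouse at dusk")
print(req["url"])
```

Because the key travels with every request and is not bound to a particular caller, rotating compromised keys and filtering prompts server-side are the main defenses available to the service operator.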
A GitHub repository that hosted the de3u code is no longer accessible, suggesting it has since been taken down.
Microsoft has also seized a website crucial to the defendants' operations, a move it says will allow it to gather evidence, work out how the alleged services were monetized, and disrupt any additional technical infrastructure it finds. The company adds that it has put in place unspecified countermeasures and additional safety mitigations on the Azure OpenAI Service in response to the activity it observed.