Microsoft has stepped up its legal fight against a group it accuses of abusing its Azure cloud services, publicly naming four members of what it calls the “Azure Abuse Enterprise.” The lawsuit, first filed in December 2024, alleges that the group illegally obtained Azure OpenAI credentials and used them to create and disseminate explicit deepfake content.
The operation allegedly involved reselling unauthorized access to Microsoft’s AI services and providing tools designed to bypass their safety guardrails. The four named defendants are Phát Phùng Tấn of Vietnam, Alan Krysiak of the UK, Ricky Yuen of Hong Kong, and Arian Yadegarnia of Iran.
According to Microsoft, the group, which it also tracks as Storm-2139, operated as a structured network of developers who built the illicit AI tools, suppliers who distributed them, and end users who generated the prohibited content.
According to court filings, Microsoft was able to track down some of the defendants through online conversations, including on 4chan, where individuals inadvertently revealed their own identities. The tech giant says it has also identified other individuals in the US, UK, Austria, Turkey, and Russia, but is withholding their names so as not to interfere with any criminal investigations. Microsoft further claims that people in Argentina, Paraguay, and Denmark knowingly used its AI capabilities without authorization, even if their activity did not necessarily violate the company’s rules.
Beyond pursuing legal action, Microsoft has taken technical measures to dismantle the group’s infrastructure, such as seizing domains associated with the operation. The company also plans to refer cases to law enforcement agencies in other countries.
As the investigation unfolds, Microsoft’s Digital Crimes Unit continues to monitor online platforms for further evidence, with reports suggesting that some members of the accused group have attempted to shift blame onto others.
Meanwhile, Microsoft remains firm in its stance against AI misuse, pushing for stricter regulations and enhanced security measures to prevent similar abuses in the future.