Microsoft AI Researchers Accidentally Expose 38TB of Private Data

Unveiling the Microsoft AI Mishap

Corporate secrets, passwords, and over 30,000 internal Microsoft Teams messages lost their cloak of invisibility when Microsoft AI researchers published a misconfigured storage URL while sharing open-source AI training data and models. The issue came to light during routine internet scans by Wiz, a cloud data security startup founded by ex-Microsoft software engineers.

Overly Permissive Token Gives Control to Potential Attackers

The mishap occurred in June in a GitHub repository named robust-models-transfer under the Microsoft organization, which provides open-source code and AI models for image recognition. Wiz reports that Microsoft had shared the files using a Shared Access Signature (SAS) token, an Azure feature that allows data sharing from Azure Storage accounts.

Although a SAS token’s access level can be limited to specific files, Wiz discovered that the link was configured to share the entire storage account, including another 38TB of private files. Worse, the misconfigured token granted “full control” permissions instead of read-only, giving attackers the power to delete and overwrite existing files.
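For contrast, here is a minimal sketch, using the azure-storage-blob Python SDK, of minting a SAS token scoped the way Wiz recommends: a single named blob, read-only permission, and a short expiry. The account, container, and blob names below are hypothetical placeholders, not Microsoft’s actual resources.

```python
# Minimal sketch: a narrowly scoped SAS token for one blob.
# All names below are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="exampleaccount",             # placeholder storage account
    container_name="public-models",            # placeholder container
    blob_name="model-checkpoint.pt",           # share only this one file
    account_key="<account-key>",               # signing key; never publish it
    permission=BlobSasPermissions(read=True),  # read-only, no write or delete
    expiry=datetime.now(timezone.utc) + timedelta(days=7),  # time-limited
)

# The resulting shareable URL grants access to exactly one file.
url = (
    "https://exampleaccount.blob.core.windows.net"
    f"/public-models/model-checkpoint.pt?{sas_token}"
)
print(url)
```

Had the published link been scoped this way, the worst-case exposure would have been a single, intentionally public file rather than the entire storage account.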

“An attacker could have injected malicious code into all the AI models in this storage account, and every user who trusts Microsoft’s GitHub repository would’ve been infected by it,” Wiz reports.

AI Lessons Learned

Within two days of the initial disclosure, Microsoft investigated and invalidated the SAS token. The company acknowledged that the storage account held backups of two former employees’ workstation profiles, along with internal Microsoft Teams messages between those employees and their colleagues. According to Microsoft, there was no security issue or vulnerability within Azure Storage or the SAS token feature, no customer data was exposed, and no customer action was required.

Microsoft is making ongoing improvements to harden the SAS token feature further and continues to evaluate the service to bolster its secure-by-default posture. On Monday, the company shared its learnings and best practices in a blog post “to inform our customers and help them avoid similar incidents in the future.”
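One such practice is worth illustrating. A SAS token signed directly with an account key cannot be revoked on its own; invalidating it means rotating the key itself, which breaks every token signed with that key. Tying tokens to a stored access policy instead allows clean, targeted revocation. Below is a minimal sketch using the same hypothetical account as above; the policy name is likewise an assumption for illustration.

```python
# Minimal sketch: revocable SAS via a stored access policy.
# Account, container, key, and policy names are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccessPolicy,
    BlobServiceClient,
    ContainerSasPermissions,
)

service = BlobServiceClient(
    account_url="https://exampleaccount.blob.core.windows.net",
    credential="<account-key>",  # placeholder signing key
)
container = service.get_container_client("public-models")

# Define a named, read-only policy with a short lifetime. SAS tokens are
# then minted against it via generate_container_sas(..., policy_id="readonly-share").
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)
container.set_container_access_policy(signed_identifiers={"readonly-share": policy})

# Deleting the policy immediately invalidates every SAS issued under it,
# with no need to rotate the account key.
container.set_container_access_policy(signed_identifiers={})
```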

Harnessing the Power of AI

This incident demonstrates the new risks and challenges organizations take on as they leverage AI’s power. The massive amounts of data that data scientists and engineers handle as they race to produce new AI solutions demand extra security checks and safety measures.

While the Microsoft AI issue made headlines, PrivaPlan CIO Ron Bebus asks what many of us are wondering. “How many other companies using AI are not so forthcoming with their lapses in security? AI can be a powerful tool in any organization if used correctly,” he says. “PrivaPlan has over 20 years of experience helping organizations understand the risks in their environments, and we are continuing to help our customers understand the risks that AI brings.”

The security experts at PrivaPlan are staying on top of everything AI, ready to help companies understand and mitigate AI risks and benefit from its use. We’re here for you. Contact us today!
