A misconfigured link accidentally exposed 38TB of Microsoft data and could have allowed attackers to inject malicious code into the company’s AI models.
The finding comes from cloud security provider Wiz, which recently scanned the internet for exposed storage accounts. It found a software repository on Microsoft-owned GitHub dedicated to supplying open-source code and AI models for image recognition.
On the affected GitHub page, a Microsoft employee had created a URL, enabling visitors to the software repository to download AI models from an Azure storage container. “However, this URL allowed access to more than just open-source models,” Wiz said in its report. “It was configured to grant permissions on the entire storage account, exposing additional private data by mistake.”
Scans from Wiz Research also indicated the Azure storage container held 38TB of data, including “passwords to Microsoft services, secret keys, and over 30,000 internal Microsoft Teams messages from 359 Microsoft employees.”
The URL to the storage container was also created using a powerful “Shared Access Signature” or SAS token, which gave anyone visiting the link—including potential attackers—the ability to view, delete, or overwrite those files.
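For context, here is a minimal sketch of how such a token can be minted with the azure-storage-blob Python SDK; the account name, key, container, and blob names are hypothetical placeholders, not details from the Wiz report. An account-level SAS with write and delete rights and a far-off expiry grants exactly the kind of broad, long-lived access described above, whereas a blob-scoped, read-only token limits the blast radius.

```python
from datetime import datetime, timedelta

from azure.storage.blob import (
    AccountSasPermissions,
    BlobSasPermissions,
    ResourceTypes,
    generate_account_sas,
    generate_blob_sas,
)

ACCOUNT = "examplestorage"   # hypothetical account name
KEY = "<account-key>"        # placeholder; never hard-code real keys

# Over-permissive: an account-level SAS covering every container and blob,
# with read/write/delete rights and a decades-long expiry. Anyone holding
# a URL built from this token can view, overwrite, or delete the data.
risky_sas = generate_account_sas(
    account_name=ACCOUNT,
    account_key=KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, delete=True, list=True),
    expiry=datetime.utcnow() + timedelta(days=365 * 30),
)

# Safer: a SAS scoped to a single blob, read-only, expiring in an hour.
scoped_sas = generate_blob_sas(
    account_name=ACCOUNT,
    container_name="models",
    blob_name="image-recognition.ckpt",
    account_key=KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)
```

Either token is simply appended to the blob URL (`https://<account>.blob.core.windows.net/<container>/<blob>?<sas>`), which is why a leaked link carries the token’s full permissions with it.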
“This is particularly interesting considering the repository’s original purpose: providing AI models for use in training code,” Wiz said. “Meaning, an attacker could have injected malicious code into all the AI models in this storage account, and every user who trusts Microsoft’s GitHub repository would’ve been infected by it.”
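The report doesn’t prescribe a specific countermeasure, but a common defense against this kind of tampering is verifying a downloaded model against a checksum published somewhere the attacker can’t overwrite. A minimal sketch, with a hypothetical file name and placeholder hash:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model files need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: the expected hash should come from somewhere
# out-of-band (e.g., the repository README), not from the same storage
# account an attacker could overwrite.
EXPECTED = "<expected-sha256-hex>"
model = Path("image-recognition.ckpt")
if sha256_of(model) != EXPECTED:
    raise RuntimeError(f"Checksum mismatch for {model}; refusing to load.")
```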
Wiz reported this to Microsoft in June, and the company promptly plugged the leak. “No customer data was exposed, and no other internal services were put at risk because of this issue,” Microsoft said.