Even the smartest of boffins can trip up sometimes, and that's exactly what happened when a member of Microsoft's AI research team accidentally exposed 38TB of sensitive internal data by misconfiguring a link.
Wiz, a cloud security company that routinely looks for vulnerabilities or exposures in cloud-hosted data, detailed the exposure on its blog (via ITWire). It found a GitHub repository belonging to Microsoft's AI research division that hosted open-source code and AI models for image recognition. But that's not all Wiz found.
A configuration error allowed anyone to access the entire storage account, and that data included two complete PC backups belonging to Microsoft employees. According to Wiz, the backups contained "sensitive personal data, including passwords to Microsoft services, secret keys, and over 30,000 internal Microsoft Teams messages from 359 Microsoft employees."
Furthermore, the files weren't read-only. They could be rewritten or deleted at will. In fairness to Microsoft and the employees, access to the files wasn't completely public. It was granted via an Azure sharing feature called a SAS token, essentially a shareable link, but in this case one that granted full access to the entire storage account. Anyone with that link, including users simply looking to grab the AI source code, would have had that access.
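For the curious, an Azure SAS URL encodes its grant directly in its query string: the `sp` parameter lists the signed permissions (r = read, a = add, c = create, w = write, d = delete, l = list) and `se` sets the expiry. A minimal sketch of how one might spot an overly broad link, using only Python's standard library and hypothetical example URLs (not the actual leaked link):

```python
# Sketch: inspect the 'sp' (signed permissions) parameter of a SAS URL.
# The URLs below are hypothetical, for illustration only.
from urllib.parse import urlparse, parse_qs

WRITE_OR_DELETE = set("wd")  # either letter means the link isn't read-only

def sas_permissions(url: str) -> set:
    """Return the set of permission letters granted by a SAS URL."""
    qs = parse_qs(urlparse(url).query)
    return set(qs.get("sp", [""])[0])

def is_overly_permissive(url: str) -> bool:
    """True if the link allows writing or deleting, not just reading."""
    return bool(sas_permissions(url) & WRITE_OR_DELETE)

# A link scoped for sharing a model (read + list only):
read_only = "https://example.blob.core.windows.net/models?sv=2021-06-08&sp=rl&se=2024-01-01&sig=abc"
# A link granting full control, like the one described above:
full_access = "https://example.blob.core.windows.net/?sv=2021-06-08&sp=racwdl&se=2024-01-01&sig=xyz"

print(is_overly_permissive(read_only))    # False
print(is_overly_permissive(full_access))  # True
```

The lesson Wiz draws is essentially the least-privilege one: a SAS token intended to share open-source models only needed `r` (and perhaps `l`), not the full `racwdl` set.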
What's worse is that the data had been exposed since 2020. Microsoft was only made aware of the exposure in June this year, meaning the data was accessible for roughly three years.
Microsoft posted a lengthy statement on its own blog, stating "No…

Read more on pcgamer.com