We could all use our own dedicated, custom-built chatbot, right? Well, rejoice because Microsoft's Copilot Studio is a handy tool for the less technical (those of us who don't dream in Fortran) to create their own chatbot. The idea is to make it easy for most businesses and organisations to knock up a chatbot based on their internal documents and data.
You could imagine a game dev using a chatbot to help gamers ask questions about everything from how to complete a game to applying the best settings and fixing technical issues. There is, inevitably, a catch, however.
According to Zenity, an AI security specialist, Copilot Studio and the chatbots it creates are a security nightmare (via The Register). Zenity CTO Michael Bargury hosted a recent session at the Black Hat security conference, digging into the horrors that unfold if you allow Copilot access to data to create a chatbot.
Apparently, it's all down to Copilot Studio's default security settings, which are reportedly inadequate. Put another way, the danger is that you use that super-easy Copilot Studio tool to create a super-useful tool that customers or employees can use to query using natural language, only to find it opens up a great big door to exploits.
Bargury demonstrated how a bad actor can place malicious code in a harmless-looking email, instruct the Copilot bot to «inspect» it, and—presto—malicious code injection achieved.
Another example involved Copilot feeding users a fake Microsoft login page where the victim's credentials would be harvested, all displayed within the Copilot chatbot itself (via TechTarget).
Moreover, Zenity claims the average large enterprise in the US already has 3,000 such bots up and running. Scarily, it claims 63% of them are are discoverable online. If true, that means your average Fortune 500 outfit has about 2,000 bots ready and willing to spew out critical, confidential corporate information.
Keep up to date with the most important stories and the best deals, as picked by
Read more on pcgamer.com