The House of Lords Communications and Digital Committee met today with Rob Sherman, VP of policy and deputy chief privacy officer at Meta, and Owen Larter, director of global responsible AI public policy at Microsoft, to discuss large language models and some of the wider implications of AI. In a far-ranging discussion in which many words were said and not a lot of actual information conveyed, one particular tidbit caught our attention.
When asked directly by the chair of the committee, Baroness Stowell of Beeston, whether either company could recall an AI model that had been "identified as unsafe", or stop it from being deployed any further, and how that might work, Rob Sherman gave a somewhat rambling response:
"I think it depends on what the technology is and how it's being used … one of the things that is quite important is to think about these things upfront before they're released … there are a number of other measures that we can take, so for example, once a model is released there's a lot of work that what we call a deployer of the model has to do, so there's not only one actor that's responsible for deploying this technology…
"When we released Llama, [we] put out a responsible use guide that talks about the steps that a deployer of the technology can do to make sure that it's used safely, and that includes things like what we call fine tuning, which is taking the model and making sure it's used appropriately … and then also filtering on the outputs to make sure that when somebody is using it in an end capacity, that the model is being used responsibly and thoughtfully."
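For the curious, "filtering on the outputs" in practice generally means running a model's generations through a separate safety check before they reach the user. The following is a minimal, hypothetical sketch of that idea in Python, not Meta's actual pipeline; generate_reply and BLOCKED_TOPICS are stand-ins invented purely for illustration:

```python
# Hypothetical sketch of deployer-side output filtering.
# None of this reflects Meta's real tooling; generate_reply and
# BLOCKED_TOPICS are invented placeholders for illustration.

BLOCKED_TOPICS = ("how to make a weapon", "self-harm instructions")

def generate_reply(prompt: str) -> str:
    # Stand-in for a call to the deployed language model.
    return f"Model response to: {prompt}"

def passes_moderation(text: str) -> bool:
    # Crude keyword check; real deployments typically use a
    # dedicated safety classifier rather than string matching.
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def safe_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    # Filter the model's output before it reaches the end user.
    return reply if passes_moderation(reply) else "Sorry, I can't help with that."

print(safe_reply("Tell me a joke about cats"))
```

The point being that, in Sherman's framing, this last line of defence sits with the deployer rather than with the company that trained the model.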
Microsoft's Owen Larter, meanwhile, did not respond at all, although in fairness the discussion was wide-ranging.