The Defense Department's top artificial intelligence official said the agency needs to know more about AI tools before it fully commits to using the technology and urged developers to be more transparent.
Craig Martell, the Pentagon's chief digital and artificial intelligence officer, wants companies to share insights into how their AI software is built — without forfeiting their intellectual property — so that the department can “feel comfortable and safe” adopting it.
AI tools such as chatbots are built on large language models, or LLMs, which are trained on massive data sets; image generators rely on similar generative models. These services are typically offered as a black box: users can't see their inner workings, which makes it hard to understand how the technology reaches its decisions or why its performance improves or degrades over time.
“We're just getting the end result of the model-building — that's not sufficient,” Martell said in an interview. The Pentagon has no insight into how the models are structured or what data they were trained on, he said.
Companies also aren't explaining what dangers their systems could pose, Martell said.
“They're saying: ‘Here it is. We're not telling you how we built it. We're not telling you what it's good or bad at. We're not telling you whether it's biased or not,'” he said.
He described such models as the equivalent of “found alien technology” for the Defense Department. He's also concerned that only a few groups of people have enough money to build LLMs. Martell didn't identify any companies by name, but Microsoft Corp., Alphabet Inc.'s Google and Amazon.com Inc. are among those developing LLMs for the commercial market, along with startups OpenAI and Anthropic.
Martell is inviting industry and academics