Knowledge advantage can save lives, win wars and avert disaster. At the Central Intelligence Agency, basic artificial intelligence – machine learning and algorithms – has long served that mission. Now, generative AI is joining the effort.
CIA Director William Burns says AI tech will augment humans, not replace them. The agency's first chief technology officer, Nand Mulchandani, is marshaling the tools. There's considerable urgency: Adversaries are already spreading AI-generated deepfakes aimed at undermining U.S. interests.
A former Silicon Valley CEO who helmed successful startups, Mulchandani was named to the job in 2022 after a stint at the Pentagon's Joint Artificial Intelligence Center.
Among projects he oversees: a ChatGPT-like generative AI application that draws on open-source data. Thousands of analysts across the 18-agency U.S. intelligence community use it. Other CIA projects that use large-language models are, unsurprisingly, secret.
This Associated Press interview with Mulchandani has been edited for length and clarity.
Q: You recently said generative AI should be treated like a “crazy, drunk friend.” Can you elaborate?
A: When these generative AI systems “hallucinate,” they can sometimes behave like your drunk friend at a bar who says something that pushes you outside your normal conceptual boundary and sparks out-of-the-box thinking. Remember that these AI-based systems are probabilistic in nature, so they are not precise. For creative tasks like art, poetry, and painting, these systems are excellent. But I wouldn't yet use them for doing precise math or designing an airplane or skyscraper: in those activities, “close enough” doesn't work. They can also be biased and narrowly focused, which I call the “rabbit hole” problem.
Q: The only current use of a large-language model at enterprise scale I'm aware of at CIA is the open-source AI, called Osiris, that it created for the entire intelligence community. Is that correct?
A: