No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it's paramount that AI systems are safe, secure, trustworthy and socially responsible. But unlike the atom bomb, this paradigm shift has been driven almost entirely by the private tech sector, which has been resistant to regulation, to say the least. Billions are at stake, making the Biden administration's task of setting standards for AI safety a major challenge.
To define the parameters, it has tapped a small federal agency, the National Institute of Standards and Technology. NIST's tools and measures define products and services from atomic clocks to election security technology and nanomaterials.
At the helm of the agency's AI efforts is Elham Tabassi, NIST's chief AI advisor. She shepherded the AI Risk Management Framework published 12 months ago that laid groundwork for Biden's Oct. 30 AI executive order. It catalogued such risks as bias against non-whites and threats to privacy.
Iranian-born, Tabassi came to the U.S. in 1994 for her master's in electrical engineering and joined NIST not long after. She is principal architect of a standard the FBI uses to measure fingerprint image quality.
This interview with Tabassi has been edited for length and clarity.
Q: Emergent AI technologies have capabilities their creators don't even understand. There isn't even an agreed-upon vocabulary; the technology is so new. You've stressed the importance of creating a lexicon on AI. Why?
A: Most of my work has been in computer vision and machine learning. There, too, we needed a shared lexicon to avoid quickly devolving into disagreement. A single term can mean different things to different people. Talking past each other is particularly common in interdisciplinary fields such as AI.
Q: You've said that for your work to succeed you need input not just from computer scientists and engineers but also from attorneys, psychologists, philosophers.