Daniela Amodei, a co-founder of artificial intelligence company Anthropic, called a meeting with AI leaders at the White House last week an “incredible first step” in the effort to ensure the technology is safe.
In a conversation with Bloomberg Television on Monday, Amodei stressed the need for “communication early and often” with policymakers, academics and other groups, both in the US and abroad.
Amodei said that the startup, which has billed itself as making a safer kind of chatbot, has focused on ensuring the latest version of its technology, called Claude 2, returns harmless answers — even to harmful questions. “Of course no model on the market is 100% immune from jailbreaks or is perfectly safe, but really our goal is to try and provide a model that is as safe as possible,” she said.
San Francisco-based Anthropic was founded in 2021 by a handful of former OpenAI staffers, including Amodei and her brother, Dario Amodei. The startup has publicly urged caution about the race to develop and release AI systems, and the potential impacts of those tools. In May, the company said it raised $450 million in funding, bringing its total raised so far to more than $1 billion.