We still have a lot to figure out. That was my impression of a panel at our Transform 2023 event yesterday that drilled into the ethics of generative AI.
The panel was moderated by Philip Lawson, AI policy lead at the Armilla AI | Schwartz Reisman Institute for Technology and Society. It included Jen Carter, global head of technology at Google.org, and Ravi Jain, chair of the Association for Computing Machinery (ACM) working group on generative AI.
Lawson said that the aim was to dive deeper into better understanding the pitfalls of generative AI and how to successfully navigate its risks.
He noted that the underlying technology and Transformer-based architectures have been around for a number of years, though we’re all aware of the surge in attention in the last eight to 10 months or so with the launch of large language models like ChatGPT.
Carter noted that creators have been building on advances in AI since the 1950s, and that neural networks offered great advances. But the Transformer architecture, which emerged around 2017, has been a significant leap. More recently, the field has taken off again with ChatGPT, giving large language models far more breadth and depth in how they can respond to queries. That has been truly exciting, she said.
“There’s a tremendous amount of hype,” Jain said. “But for once, I think the hype is really worth it. The speed of development that I’ve seen in the last year — or eight months — in this area has
Read more on venturebeat.com