Ever since Alan Turing's “imitation game,” we've been acutely aware of the importance of measuring the capabilities of computers against our own miraculous brains. The British pioneer's method, outlined in 1950, looks primitive today, but it sought to answer a persistent question: How will we tell when a machine has become as intelligent as (or more intelligent than) a human being?
Defining such progress is imperative for productive conversations about artificial intelligence. Specifically, the question of what can be considered artificial general intelligence — a “mind” as adaptable as our own — needs to be considered using a set of shared parameters. Currently, the term lacks a precise definition, making predictions of AGI's arrival and impact either unnecessarily alarmist or insufficiently concerned.
Consider the hopeless spread of predictions on AGI. Earlier this year, the preeminent AI researcher Geoffrey Hinton predicted “without much confidence” that AGI could be present within five to 20 years. One attempt to collate a sample of approximately 1,700 experts offered timing estimates from next year to never. One reason for the chasm is that we haven't decided collectively what we're even talking about. “If you were to ask 100 AI experts to define what they mean by ‘AGI,' you would likely get 100 related but different definitions,” notes a recent paper from a team at DeepMind, the AI unit within Google.
One of the paper's co-authors, Shane Legg, is credited with popularizing the term AGI. Now he and his team are seeking to set up a sensible framework with which to measure and define the technology — a taxonomy that can be used to help assuage or heighten fears and offer straightforward context to non-experts.