In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
McCarthy called this new field of study "artificial intelligence" and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, nearly seven decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans. But true artificial intelligence, as McCarthy conceived it, continues to elude us.
One great challenge with artificial intelligence is that it's a broad term with no clear agreement on its definition.
McCarthy had proposed AI would solve problems the way humans do: "The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans," McCarthy said.
Andrew Moore, Dean of Computer Science at Carnegie Mellon University, provided a more modern definition of the term in a 2017 interview with Forbes: "Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence."