The latest generative AI models are capable of astonishing, magical human-like output. But do they actually understand anything?
That'll be a big, fat no according to the latest study from MIT (via TechSpot). More specifically, the key question is whether large language models, or LLMs, the technology at the core of the most powerful chatbots, are capable of constructing accurate internal models of the world.
And the answer the MIT researchers largely came up with is no, they can't. To find out, the MIT team developed new metrics for testing AI that go beyond simple measures of response accuracy and hinge on what's known as deterministic finite automata, or DFAs.
In this context, a DFA frames a problem as a sequence of interdependent steps governed by a fixed set of rules. Among other tasks, the researchers chose navigating the streets of New York City.
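To see what "interdependent steps governed by a set of rules" means in practice, here is a minimal sketch of a DFA in Python. The states, moves, and transition table are entirely made up for illustration and are not taken from the MIT study; the point is only that, at every step, the current state plus the next input determines exactly one next state.

```python
# Hypothetical transition rules: (current state, move) -> next state.
# A real-world model of NYC navigation would encode actual streets;
# these entries are invented purely to illustrate the DFA idea.
TRANSITIONS = {
    ("start", "north"): "on_5th_ave",
    ("on_5th_ave", "east"): "on_42nd_st",
    ("on_42nd_st", "north"): "at_destination",
}

ACCEPTING = {"at_destination"}  # states that count as a successful route


def run_dfa(moves):
    """Apply each move in order; return True if we end in an accepting state."""
    state = "start"
    for move in moves:
        key = (state, move)
        if key not in TRANSITIONS:
            return False  # no rule covers this move: the route is invalid
        state = TRANSITIONS[key]
    return state in ACCEPTING


print(run_dfa(["north", "east", "north"]))  # True: every step follows a rule
print(run_dfa(["north", "north"]))          # False: second move breaks the rules
```

The appeal of this setup as a benchmark is that correctness is unambiguous: an answer either follows the rules at every step or it doesn't, so a model can't get credit for plausible-sounding output that quietly violates the underlying system.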
The MIT team found that some generative AI models can produce very accurate turn-by-turn driving directions in New York City, but only in ideal circumstances.