A chatbot from the 1960s thoroughly beat OpenAI's GPT-3.5 in a Turing test, because people thought it was just 'too bad' to be an actual AI
A small research group recently examined the performance of 25 AI 'people', built on two large language models created by OpenAI, in an online Turing test. None of the AI bots ultimately passed the test, but the GPT-3.5-based ones performed so badly that a chatbot from the mid-1960s was nearly twice as successful at passing itself off as human, mostly because real people didn't believe it was actually an AI.