As teachers tap tools to detect whether students are using ChatGPT to cheat, OpenAI says don’t bother: The tools don’t reliably work.
The company issued the advice in a new FAQ instructing educators about the use of ChatGPT in schools, including the potential pitfalls in trying to detect AI-written text.
A number of tools have emerged in response to concerns that AI-powered chatbots can help students cheat on their homework assignments and tests. But according to OpenAI, relying on a tool to suss out AI-written text in a student’s work is fraught with problems.
“While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content,” the company wrote in the FAQ.
OpenAI added that “one of our key findings was that these tools sometimes suggest that human-written content was generated by AI.” For example, the company’s own investigation found that AI detector tools could mistakenly flag human-written text, including works by the playwright William Shakespeare and the Declaration of Independence, as AI-generated.
“There were also indications that it could disproportionately impact students who had learned or were learning English as a second language and students whose writing was particularly formulaic or concise,” the company added.
Such results risk leading a teacher to falsely accuse a student of cheating with ChatGPT. OpenAI also noted that some teachers may be resorting to pasting text they suspect is AI-generated into ChatGPT and asking whether it wrote the content. But this approach is also flawed: the chatbot has limited memory and cannot recall conversation histories from other users.