OpenAI has shut down its AI text-detection tool, citing a "low rate of accuracy" when determining if written work was penned by a human or ChatGPT, its AI chatbot.
"We are working to incorporate feedback and are currently researching more effective provenance techniques," OpenAI says(Opens in a new window).
The company is working on an improved version for text and says it has "made a commitment" to do the same for audio and visual content, such as images generated with its DALL-E model.
OpenAI first released the text-detection tool in January 2023, citing the importance of having systems that can sniff out false claims generated by AI. OpenAI published a paper the same month, written in collaboration with Stanford University and Georgetown University faculty, outlining the risk of automated misinformation campaigns.
"Generative language models have improved drastically, and can now produce realistic text outputs that are difficult to distinguish from human-written content," the paper reads. "For malicious actors, these language models bring the promise of automating the creation of convincing and misleading text."
Such misuse could range from students attempting to cheat on an assignment to coordinated election interference, and everything in between. The paper concedes there is little anyone can do to fully prevent AI's influence on the human world now that the technology has been developed and publicly released, saying "no reasonable mitigation can be expected to fully prevent the threat of AI-enabled influence operations."
AI detection tools could be a start, although OpenAI's tool was limited and inaccurate from the outset. It required someone to manually input a piece of text at least 1,000 characters long (roughly 150 to 250 words) before it would attempt a verdict.