ChatGPT and other chatbots are on the rise as more people fold them into their daily workflows. But their growth has brought a plagiarism and AI-generated content problem that affects creatives, publishers, and educational institutions. A tool that reliably detects ChatGPT-generated content would go a long way towards mitigating this, and OpenAI reportedly already has one that could catch students and employees who use the chatbot to get work done, but the company has chosen not to release it yet.
According to a report by The Wall Street Journal, OpenAI does have a watermarking system that can help detect ChatGPT-made content, but the company is holding it back for now because opinion within the company is divided.
ChatGPT has become a tool that many school and college students use to get assignments done, creating a significant plagiarism problem and forcing teachers to find ways to detect it. A watermarking system would help address this, and OpenAI already has one: in a blog update today, the company confirmed, “Our teams have developed a text watermarking method that we continue to consider as we research alternatives.”
The AI giant also acknowledged that the tool has several shortcomings: it is less effective when the text is run through a translation system, rewritten using another generative model, and in other such cases. More importantly, OpenAI notes that introducing the system could “stigmatise” the use of ChatGPT as a writing tool among non-native English speakers, and this is where things get tricky for the company.
The logic is simple: if more users are caught using OpenAI's tools, people would grow hesitant to use ChatGPT, which would hurt the company. On the other hand, implementing the system could be a plus for others, especially if OpenAI
Read more on tech.hindustantimes.com