ChatGPT is a gossip. Google’s Bard, too, and maybe Bing AI. What you get out of them depends on all the information that went in. And that’s precisely the problem. Why? Because everything you ask them, tell them, or prompt them with becomes input for further training. The question you ask today may inform the answer someone gets tomorrow. That’s why you should be very, very careful what you say to an AI.
Is it really such a problem if your prompts and queries get recycled to inform someone else’s answers? In a word, yes. You could get in trouble at work, as several Samsung engineers found out when they used ChatGPT to debug some proprietary code. Another Samsung employee took advantage of ChatGPT’s ability to summarize text…but the text in question came from meeting notes containing trade secrets.
Here’s a simple tip: DO NOT use AI on any work-related project without checking your company’s policy. Even if your company has no policy, think twice, or even three times, before you put anything work-related into an AI. You don’t want to become infamous for triggering the privacy fiasco that spurs your company into creating such a policy.
Be careful with your own unique content as well. Do you write novels? Short stories? Blog posts? Have you ever used an AI helper to check the grammar in a rough draft, or slim down a work in progress to a specific word count? It’s really convenient! Just don’t be surprised if bits of your text show up in someone else’s AI-generated article before yours even gets to publication.
Maybe you don’t do anything with the current AI services beyond prompting them to tell jokes or make up stories. You’re not contributing much to the overall knowledge base, but your queries and prompts become part of the training data all the same.