We’re definitely in the middle of an "AI summer," a period when both scientists and the general public are excited about the possibilities of machine learning. Generative AI models such as ChatGPT and Midjourney are letting more people than ever try this powerful type of tool. But that exposure is also revealing deep flaws in how AI programs are built and trained, and that could have major repercussions for the industry.
Here are our picks for the ten biggest flaws in current generative AI models.
If AI algorithms were living beings, they’d be something like dogs: they really want to make you happy, even if that means leaving a dead raccoon on the front porch. Generative AI has to produce a response to your query, even when it isn’t capable of giving you one that is factual or sensible. We’ve seen this in examples from ChatGPT, Bard, and others: If the AI doesn’t have enough actual information in its knowledge base, it fills the gaps with text that merely sounds plausible according to its model. That’s why when you ask ChatGPT about me, it correctly says I write for PCMag, but it also says I wrote Savage Sword of Conan in the 1970s. I wish!
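Mechanically, the model has no built-in "I don’t know" state: decoding just converts scores into probabilities and picks a token, so something always comes out. Here’s a minimal sketch of that sampling step, with a made-up vocabulary and made-up scores (nothing here comes from any real model; actual systems score tens of thousands of tokens at every step):

```python
import numpy as np

# Hypothetical toy example: four possible next words and the model's raw
# scores (logits) for them. The numbers are invented for illustration.
vocab = ["Paris", "London", "Berlin", "Tokyo"]
logits = np.array([2.1, 1.9, 1.7, 0.4])

# Softmax turns scores into probabilities, and sampling ALWAYS returns a
# token. There is no branch for "I have no grounded answer."
probs = np.exp(logits - logits.max())
probs /= probs.sum()

next_word = np.random.choice(vocab, p=probs)
print(next_word)  # fluent-sounding output, whether or not it's factual
```

Unless a model has been specifically trained to decline, the most probable continuation wins, and a confident-sounding fabrication is often more probable than an admission of ignorance.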
Another significant issue is the datasets these tools are trained on: they have a cutoff date. Generative AI models are fed massive amounts of data, which they use to assemble their responses. But the world is constantly changing, and it doesn’t take long for that training data to become stale. Refreshing a model’s knowledge is a massive undertaking, because what it "knows" isn’t stored as discrete facts; it’s distributed across billions of interconnected weights. Bolting on new information piecemeal tends to degrade what’s already there, so major updates generally mean retraining on a fresh dataset.
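To see why piecemeal updates are risky, here’s a toy illustration: a tiny linear model standing in for a real network, with invented numbers throughout. Nudging the shared weights to fit new data alone overwrites the behavior learned from the old data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a linear fit to an old dataset (the original training run).
X_old = rng.normal(size=(200, 5))
w_old_truth = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y_old = X_old @ w_old_truth

# Closed-form fit on the old data; this model now "knows" the old facts.
w = np.linalg.lstsq(X_old, y_old, rcond=None)[0]

# "New facts" arrive: a smaller dataset with a different relationship.
X_new = rng.normal(size=(50, 5))
w_new_truth = np.array([-1.0, 2.0, 0.5, -3.0, 1.0])
y_new = X_new @ w_new_truth

# Naive piecemeal update: gradient steps on the new data only.
for _ in range(200):
    grad = X_new.T @ (X_new @ w - y_new) / len(X_new)
    w -= 0.1 * grad

# The update fits the new data, but the same shared weights no longer
# reproduce the old data: the earlier "knowledge" has been overwritten.
print("old-data error:", np.mean((X_old @ w - y_old) ** 2))  # large
print("new-data error:", np.mean((X_new @ w - y_new) ** 2))  # near zero
```

Techniques like fine-tuning and retrieval can soften this in practice, but a genuine knowledge refresh for a large model still generally means another expensive training run.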