The launch of ever more capable large language models (LLMs) such as GPT-3.5 has sparked much interest over the past six months. However, trust in these models has waned as users have discovered they can make mistakes – and that, just like us, they aren't perfect.
An LLM that outputs incorrect information is said to be “hallucinating”, and there is now a growing research effort towards minimising this effect. But as we grapple with this task, it's worth reflecting on our own capacity for bias and hallucination – and how this impacts the accuracy of the LLMs we create.
By understanding the link between AI's hallucinatory potential and our own, we can begin to create smarter AI systems that will ultimately help reduce human error.
How people hallucinate
It's no secret people make up information. Sometimes we do this intentionally, and sometimes unintentionally. The latter is a result of cognitive biases, or “heuristics”: mental shortcuts we develop through past experiences.
These shortcuts are often born out of necessity. At any given moment, we can only process a limited amount of the information flooding our senses, and only remember a fraction of all the information we've ever been exposed to.
As such, our brains must use learnt associations to fill in the gaps and quickly respond to whatever question or quandary sits before us. In other words, our brains guess what the correct answer might be based on limited knowledge. This is called a “confabulation” and is an example of a human bias.
Our biases can result in poor judgement. Take the automation bias, which is our tendency to favour information generated by automated systems (such as ChatGPT) over information from non-automated sources. This bias can lead us to miss errors and even act upon false information.