Let's face it, humans have a natural tendency to fear the unknown. The Terminator, The Matrix, and movies like them have implanted the seed that AI is bad. Killer cyborgs and malevolent AI have become a part of the public consciousness. The rise of the AI industry has even led to extinction level concerns… from the AI industry.
But don't go stocking up on ammo and baked beans just yet. Michio Kaku, a professor of theoretical physics at City College of New York and the CUNY Graduate Center, believes current AI models are little more than glorified tape recorders.
Kaku was interviewed by CNN's Fareed Zakaria (via Business Insider). When asked for his thoughts on AI, Kaku said: "It takes snippets of what's on the web created by a human, splices them together, and passes it off as if it created these things, and people are saying: 'Oh my God, it's a human, it's humanlike.'" He goes on to say "AI cannot distinguish true from false."
I find it hard to argue with his logic. These chatbots can only give responses based on the data they have on hand. If a chatbot doesn't have the recipe for a good Pad Thai, what use is it?
Large Language Models (LLMs) are essentially huge databases packed with data scraped from the internet. Statistical models and algorithms analyze this data and generate human-like, probability-based responses to queries.
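The "probability-based" idea can be sketched with a toy bigram model: count which word tends to follow which in a corpus, then predict the most likely continuation. This is a deliberate oversimplification for illustration; real LLMs use neural networks with billions of parameters, not simple lookup tables, but the underlying principle of predicting the next token from statistics of the training data is the same.

```python
from collections import defaultdict, Counter

# Tiny stand-in "training corpus" (hypothetical example data).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most probable word to follow `word` in the corpus."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

# "cat" follows "the" twice, more than any other word, so it wins.
print(next_word("the"))
```

Kaku's "glorified tape recorder" jab maps onto exactly this: the model can only recombine what it has already seen, and it has no mechanism for checking whether the most probable continuation is actually true.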
Processing all that data needs massive amounts of computing power, and that's why companies like Nvidia are raking in cash. LLMs are a natural fit for quantum computers, thanks to those machines' ability to process data in parallel. If, or when, we get functional and scalable quantum computers, AI tech should make an astronomical leap forward. I'll worry about Skynet or an army of Agent Smiths then.
Read more on pcgamer.com