AI sparks fears in finance, business, and law; the Chinese military trains ChatGPT-like models to predict enemy actions on the battlefield; OpenAI's GPT store faces a challenge as users exploit the platform for "AI girlfriends"; an Anthropic study reveals alarming deceptive abilities in AI models: this and more in our daily roundup. Let us take a look.
AI's growing influence is triggering concerns in finance, business, and law. FINRA has identified AI as an "emerging risk," while a World Economic Forum survey names AI-fueled misinformation as the foremost near-term threat to the global economy, according to a Washington Post report. The Financial Stability Oversight Council warns of potential "direct consumer harm," and SEC Chairman Gary Gensler highlights the risk to financial stability if investment decisions come to depend widely on AI.
Chinese military scientists are training an AI, akin to ChatGPT, to predict the actions of potential adversaries on the battlefield. The People's Liberation Army's Strategic Support Force reportedly uses Baidu's Ernie and iFlyTek's Spark, large language models similar to ChatGPT. The military AI processes sensor data and frontline reports, automatically generating prompts for combat simulations without human involvement, according to a December peer-reviewed paper by Sun Yifeng and his team, Interesting Engineering reported.
OpenAI's GPT store faces moderation challenges as users exploit the platform to create AI chatbots marketed as "virtual girlfriends," in violation of the company's guidelines.
Read more on tech.hindustantimes.com