I really want to take Elon Musk's claims to be creating "a maximum truth-seeking AI" called "TruthGPT" at face value. I really want this all to come from a place of honesty and integrity and trying to create a better, fairer world. I really want to believe that in whatever form it eventually takes, Musk's new AI venture will legitimately create "an AI that cares about understanding the universe."
I'm ready to get on board, but he's just making it so damned hard.
Especially when the source for this new information comes from a Tucker Carlson interview on Fox News (via TechCrunch). It's doubly tough when the interviewer, in trailing the interview spot on Fox and Friends, commends Musk's sense of humour and the fact he doesn't take himself too seriously, then uses this phrase with a straight face:
"People who take themselves too seriously, like Stalin… or Chuck Schumer, make me uncomfortable, and should make us all uncomfortable."
That's some powerful deadpanning, Tucks. At least give us a wink at the end, eh?
Anyway, that's just an arresting side note to Musk effectively announcing on Fox News that he's committed to creating his alternative to OpenAI and Google's own generative pre-trained transformer (GPT) AIs.
"I'm going to start something which I call TruthGPT," Musk tells Carlson. "Or a maximum truth-seeking AI that tries to understand the nature of the universe.
"And I think this might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe."
I mean, that sounds lovely. A kind, thoughtful, caring artificial intelligence that is at best "unlikely" to have us all inhumed.
Read more on pcgamer.com