"So tell me about yourself." To some, it's a dreaded phrase that ruins blind dates or job interviews right from the start. Don't you ever wish you had a little AI assistant in your ear feeding you the right lines to say in any social setting? One student from Stanford has designed just that: a way to use ChatGPT and an AR monocle to act as your own Cortana, helping out in your day-to-day like you're Master Chief.
Bryan Hau-Ping Chiang (spotted by Tom's Hardware) created a prototype digital assistant he calls an "operating system for your entire life." Speech recognition software listens in on your conversation, feeds it to ChatGPT, and spits out a response that appears on the lens of an open-source AR monocle that clips onto your glasses. It can even recognize the faces of the people you're talking to.
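For the curious, the pipeline described above is simple at its core: listen, ask the model, display. Here's a minimal sketch of that loop, where `transcribe()`, `ask_llm()`, and `display()` are hypothetical placeholders standing in for real speech-to-text, ChatGPT API, and AR-display integrations (none of which are shown in Chiang's tweets in code form):

```python
# Hypothetical sketch of a rizzGPT-style loop. The three helpers below are
# placeholders, not real APIs: a working build would wire them to a
# speech-to-text service, the ChatGPT API, and the monocle's display.

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder: a real build would call a speech-to-text service."""
    return audio_chunk.decode("utf-8")  # pretend the audio is already text

def ask_llm(transcript: str) -> str:
    """Placeholder: a real build would send the transcript to ChatGPT."""
    return f"Suggested reply to: {transcript!r}"

def display(text: str) -> str:
    """Placeholder: a real build would render text on the AR lens."""
    return text

def assistant_loop(audio_chunk: bytes) -> str:
    # 1. Listen: turn the conversation audio into text.
    transcript = transcribe(audio_chunk)
    # 2. Think: ask the language model for a suggested response.
    suggestion = ask_llm(transcript)
    # 3. Show: push the suggestion to the wearer's lens.
    return display(suggestion)

print(assistant_loop(b"So tell me about yourself."))
```

The real system would run this loop continuously on streaming audio, which is where the latency concerns discussed below come in.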
Chiang has published a series of tweets illustrating the prototype's capabilities. The tech has been going by a couple of playful names, such as lifeOS or rizzGPT, but it all does roughly the same thing. One instance has it scan your friends' faces and then "bring up relevant details to talk about based on your texts with them." It's all presented somewhat unseriously, but it's not hard to imagine this sort of tech actually being used someday in an AI assistant that can scan a stranger's face, identify them, and pull up facts and talking points based on their social media posts. That'd give you instant icebreakers, assuming you can get past the whole 'scanning people's faces without their consent' thing, don't mind interacting with other people through a language model proxy, and the latency isn't so high that you're standing there staring at them, waiting for your next line to arrive.
Read more on pcgamer.com