At this year's Google I/O, the company showcased its AI efforts, models, and features for its services across Android, Workspace, Photos, and other apps. During the event, it also announced a whole new AI agent called “Project Astra,” a camera-based chatbot designed for everyday use. Demis Hassabis, CEO of Google DeepMind, showcased a pre-recorded demo video of how the real-time, early version of the multimodal AI assistant will function. Read on to know what Project Astra is and how it works.
Google showcased the future of AI assistants with an early version of Project Astra, a multimodal AI assistant that can see the world through your camera lens. The new AI agent is more powerful and advanced than the current version of the Gemini AI model. Hassabis highlighted that he wants the AI agent to become a “universal assistant” thanks to its advanced capabilities. Through the camera viewfinder, the AI chatbot can analyse and understand the objects placed in front of it, much as a human would.
Google said Project Astra can provide real-time responses to user queries via text, audio, or video inputs. As showcased in the pre-recorded video demo, it can hold human-like conversations and answer questions intelligently. The AI agent is built for perception, comprehension, location awareness, and retrieval: it can identify objects in the user's surroundings, access and process visual information from the real world, and even locate misplaced objects in a room when prompted.
Hassabis said, “To be truly useful, an agent needs to understand and respond to the complex and dynamic world just like people do — and take in and remember what it sees and hears to understand context and take action. It also needs to be proactive, teachable and personal, so users can talk to it naturally and without lag or delay.” This is where Project Astra steps in, promising a more human-like experience for everyday users.