Google has showcased its latest smartglasses prototype powered by the next-generation Gemini AI model, aiming to give users real-time information about their surroundings. The device uses a voice-based interface and integrates the Gemini model with existing Google services such as Search, Maps, and Lens.
The glasses utilize Gemini 2.0, which enables AI “agents” that can carry out tasks on behalf of a user, such as shopping or booking reservations. To enhance the experience, Google has also made updates to Project Astra, improving latency and natural language understanding.
In a demo video, users access information about their environment, including park names, bus routes, and door codes. Users can also summon agents from their smartphones, pointing the device at objects to gather more information.
Google believes its smartglasses offer an intuitive form factor for experiencing AI, with CEO Sundar Pichai saying the new agent capabilities bring the company closer to its vision of a universal assistant. While no timeline has been announced for a wider release, Google plans to give early testers access to the glasses soon.
Source: https://www.forbes.com/sites/richardnieva/2024/12/11/googles-gemini-ai-glasses