Google showcased sleek smart glasses with a heads-up display (HUD) at TED2025, describing them as “conceptual hardware”. The glasses feature a camera, microphones, and speakers, along with a high-resolution display that projects information into the user’s field of view. The device is built around Gemini, Google’s multimodal conversational AI system, which can remember what it sees and respond accordingly.
In a roughly 15-minute demo, Google’s Android XR lead Shahram Izadi showcased several of Gemini’s capabilities, including:
– Basic Multimodal: Gemini responds to visual and audio inputs with accurate answers.
– Rolling Contextual Memory: Gemini retains information from previous conversations and uses it to answer new questions.
– Complex Multimodal: Gemini understands complex visual content, such as diagrams in books.
– Translation: Gemini translates text between different languages without requiring mode changes.
– Multi-Language Support: Gemini responds in the same language used by the user.
– Navigation: Gemini provides turn-by-turn instructions and a minimap for navigation.
While Google hasn’t announced a specific product timeline, it has been working on miniaturizing its smart glasses. Samsung is reportedly developing a competitor to the Ray-Ban Meta glasses with similar AI capabilities, and Apple is also expected to release its own smart glasses in 2027. The market looks set to become increasingly competitive as tech giants vie to pair multimodal AI with HUD glasses.
Source: https://www.uploadvr.com/google-smart-glasses-with-hud-ted2025-demo