Tech Giant Places Bets on Smart Spectacles Again

Google reveals plans for AI-integrated smart glasses at their developer conference.

Google Returns to Smart Glasses Market with AI-Powered Prototype

Google has unveiled a new prototype for intelligent glasses, marking its re-entry into the smart glasses market. Years after the underwhelming performance of its first offering, Google Glass, the company is now emphasizing AI, its own operating system, and a partnership with Samsung to regain ground lost to competitors such as Meta and its Ray-Ban smart glasses.

At this year's I/O developer conference and a TED talk held in Vancouver, Google debuted its latest smart glasses prototype. The glasses run the Android XR operating system and incorporate Google's Gemini 2.0 AI. They are capable of recognizing the wearer's surroundings, speaking to them, translating languages in real time, remembering the location of lost items, and more.

The complex processing behind these functions does not happen on the glasses themselves; instead, they pair with a smartphone for access to the necessary computing power and apps. With a simple, understated design featuring a microdisplay, camera, microphones, and speakers, the glasses aim to blend seamlessly into everyday life.

Contextual AI Features

With Gemini integrated, the glasses offer a variety of AI-powered features. They can detect objects, translate spoken language in real time, and provide contextual location information. One notable feature, "Memory," enables users to store and recall visual information, helping them find misplaced items.

Google's primary objective is intuitive, context-aware interaction. On request, users can receive digital services such as messages, reminders, navigation directions, or translations in their field of view. Control is voice-activated, so users can access these services without touching a display or touchscreen.

Android XR as a Foundation for Extended Reality

Android XR is a new operating system developed in collaboration with Samsung and Qualcomm. It serves as the technical backbone of Google's augmented reality (AR) and extended reality (XR) strategy and supports a wide range of XR devices, including smart glasses and future mixed-reality headsets such as the one codenamed "Project Moohan." With it, Google aims to create a unified, open foundation for the coming wave of wearables with extended reality features.

Privacy and user control were prioritized in the development of these new smart glasses. Details about the market launch, pricing, and exact release date have yet to be revealed. Testing is planned to start in controlled environments, and the glasses may eventually join the Pixel family, perhaps as "Pixel Glasses" alongside the Pixel 10 this fall.

In summary, Google's smart glasses combine AI and AR capabilities designed to appeal to consumers and stand out from the competition. Leveraging Android XR, Gemini, design partnerships, and a focus on everyday functionality, Google is positioning itself for a strong push into the smart glasses market against prominent rivals such as Meta and its Ray-Ban line.

I'm not going to be able to afford gadgets like these anytime soon, given the high-end technology, such as artificial intelligence and extended reality, they incorporate. Even if Google were to release its AI-powered smart glasses, marketed as "Pixel Glasses," at an affordable price point, I would still need to factor in the cost of the smartphone the glasses require to function.
