As big tech companies ready their push into smart glasses, a common playbook is taking shape, and it looks a lot like the one already set out by Meta and its Ray-Ban-branded AI glasses. Hardware from companies like Google, and potentially Samsung and Apple, seems to center on a few key components. You’ve got cameras, some kind of AI/computer vision, speakers, a voice assistant, navigation, maybe a screen, and, of course, a streamlined way to use generative AI for faking real photos. Wait, what?
In a recent demo of its upcoming smart glasses, which are set to launch sometime this year, Google’s Dieter Bohn showed off a few capabilities. While most of them are pretty par for the course for smart glasses (using computer vision to get directions or parse stuff in your surroundings), one feature in particular stood out as something I haven’t seen before.
Here's the video of the Android XR demo we showed last week at MWC 🙂 Couple things that stand out to me: how well Gemini can handle vague and complex queries and how the glasses work with apps on the phone. 😎
More details here! https://t.co/NKA1rKE1rq pic.twitter.com/uXQGwvl7Tr
— Dieter Bohn (@backlon) March 12, 2026
By linking the smart glasses to Google’s image generator, Nano Banana, Bohn shows how you can instruct them to doctor up an image on the fly. In the video demo, Bohn asks Gemini to take a picture of people in the room using the smart glasses, but then superimpose them over the “really cool church in Barcelona that I forget the name of.” Based on the demo, it seems to do exactly that, taking people in the room and using AI to essentially Photoshop them in, so it looks as though they’re standing in front of the Sagrada Familia in Barcelona.
It’s not a wholly new trick. Google has been leaning into AI photography for years now with its Pixel phones. But it’s a first for the smart glasses form factor, which can (theoretically) cut the friction between taking a picture and using AI to alter the ever-loving f*ck out of it. And even if we’ve seen Google lean into AI photography in the past, this moves the needle a bit further in that direction: the direction where whether a photo is real or fake apparently doesn’t matter.
For context, other smart glasses can kind of do this already, but not to this degree. For instance, you can ask Meta AI—the assistant inside the Ray-Ban Meta AI glasses and the Meta Ray-Ban Display—to “re-style” a photo to look more like an oil painting or a cartoon, but it’s not meant for recreating anything photorealistic. Meta’s version is more focused on turning your images into AI slop and less fixated on letting you fake an image altogether.
How well Google’s Gemini-to-Nano Banana pipeline actually works on smart glasses remains a question mark, since this is just a short, well-planned demo. Google even notes that the video’s timing is edited, which tells me the command either didn’t work as planned on the first try or took too long for Google’s taste. Either way, it’s a new trick in smart glasses to look out for, and, I guess, theoretically great news for anyone who doesn’t care about photos representing reality anymore.
Source: Gizmodo