Google Lens is becoming part of Chrome’s native AI interface

Google is testing a new AI-powered side panel experience in Chrome

Google Lens brings visual searches to the phone. (Image: Google)

Google is trying out a major tweak to how AI works inside Chrome, specifically by mashing up Google Lens with the browser’s native AI side panel. Right now, this is popping up in Chrome Canary – the experimental playground where Google tests new features before they go mainstream.

The big shift here is that Lens isn’t just acting as a standalone tool for looking up images anymore. Instead, it now triggers Chrome’s full AI interface right in the side panel, blending image search, page reading, and chat into one unified spot.

In this new setup, activating Lens does more than just highlight a picture. It opens that AI panel on the right, giving you a chat box, suggested questions, and quick actions. Since the panel can “read” the webpage you are currently on, you can ask questions about the article without ever leaving the tab.

A MacBook with Google Chrome loaded. (Image: Firmbee / Unsplash)

In testing, the AI delivers summaries and contextual answers almost instantly, keeping everything in a single thread. It also ties into Chrome’s broader AI system, meaning your visual searches and chat sessions finally live in the same history, reinforcing the idea that Google wants search, vision, and chat to feel like one continuous experience.

Why it matters and what comes next

Why this is important: This update is a clear sign that Google wants Chrome to be more than just a passive window to the web; they want it to be an active workspace. By fusing Lens with “AI Mode,” they are positioning the browser as a smart assistant that hangs out alongside whatever you are reading. It stops being a separate tool you have to switch to and starts being a helper that actually understands the context of your screen.

Gemini in the dock of a Chromebook. (Image: Nadeem Sarwar / Digital Trends)

Why you should care: Ideally, this means less tab clutter and faster answers. Whether you are deep in a research hole, online shopping, or reading a complex article, having an AI that can see what you see – and explain it – without making you leave the page is a massive workflow upgrade. It feels like a natural step toward the “assistant-first” browsing experience Google is pushing on Android and Search.

What’s next: This is still in the “rough draft” phase in Canary, and the interface is clearly a work in progress. However, the way it links the side panel, the address bar, and your task history suggests Google is serious about building a unified AI layer across Chrome. If it survives testing, this Lens-powered panel could fundamentally change the rhythm of how we search and read on the web.

Moinak Pal

Moinak Pal has been working in the technology sector covering both consumer-centric tech and automotive technology for the…

I found a Mac tool that you’ll love as a sleeker dock with extra tricks

The Mac's dock has remained static over the years. Loopty replaces it with a lot of practical pizzazz.

Loopty app switcher for Mac

The shift to macOS Tahoe introduced a whole bunch of upgrades to core Mac systems. Spotlight, in particular, got some noteworthy tweaks such as support for custom shortcuts and an improved AI-powered search system. The disappearance of LaunchPad, however, proved to be a controversial change.

Apple also didn’t pay attention to the deeper cross-app integrations that have made apps such as Raycast a hot favorite in the user community. The new Spotlight wants to be the hub of your core Mac activities, but it comes with its fair share of clutter and a few big omissions.

Read more

AMD to play safe at CES 2026, but it may still deserve your attention

AMD’s CES 2026 keynote is shaping up to be far more about AI strategy than shiny new consumer chips.

AMD CEO Dr. Lisa Su holding up a chip at Computex 2024.

Over the years, the Consumer Electronics Show (CES) has evolved from a consumer-electronics showcase into a premier global launchpad for chipmakers, turning the event into a key battleground for leadership in computing and AI hardware. The upcoming 2026 edition is expected to be no different.

AMD has confirmed that President and CEO Dr. Lisa Su will deliver the opening keynote on January 5, outlining the company’s AI vision across cloud, enterprise, edge, and consumer devices. While we aren’t expecting major announcements like a new GPU generation or a surprise Zen 6 tease (though we can still dream), expect some important launches.

Read more

ChatGPT gets major update (GPT-5.2) as OpenAI battles Google in AI arms race

OpenAI's GPT-5.2 upgrade boosts real-world productivity just as Google escalates the competition with its latest Deep Research model.

OpenAI GPT-5.2

OpenAI has officially launched GPT-5.2, the latest iteration of its flagship AI model series and its answer to Google's Gemini 3. The new model is meant to be faster, smarter, and more helpful for complex, real-world queries, with improvements in reasoning and long-document processing.

It is rolling out to ChatGPT's paid subscribers on the Plus, Pro, Team, and Enterprise tiers, and to developers via the API. OpenAI offers GPT-5.2 in three variants: GPT-5.2 Instant, GPT-5.2 Thinking, and GPT-5.2 Pro (is it just me, or does the naming sound similar to that of the Gemini models?).
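For developers, access should look like the usual OpenAI SDK flow. Below is a minimal sketch of calling one of the three variants from Python; the model identifier string is a hypothetical placeholder, since the article doesn't give the exact API names for the Instant, Thinking, and Pro tiers.

```python
# Minimal sketch of calling GPT-5.2 through OpenAI's Python SDK.
# The model identifier "gpt-5.2-thinking" is a hypothetical placeholder;
# check OpenAI's model list for the actual Instant / Thinking / Pro names.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5.2-thinking",  # hypothetical ID for the reasoning-focused tier
    messages=[
        {
            "role": "user",
            "content": "Summarize the key obligations in this long contract: ...",
        }
    ],
)

print(response.choices[0].message.content)
```

Swapping the model string between the Instant, Thinking, and Pro variants is presumably how you trade latency against reasoning depth, in line with how OpenAI has tiered previous model families.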

Read more
