
Google I/O 2025 Liveblog: Gemini AI Updates, Android XR, Spotlight on AI and more

Everything that's being announced at Google's midyear software event today at the Shoreline Amphitheatre.

David Lumb
James Martin/CNET

Google I/O, the company's annual event in Mountain View, California, at the Shoreline Amphitheatre, is underway, giving the world a look at upcoming Android, mobile and AI software. Last week's Android Show revealed several ways Google is refining the mobile operating system in the big Android 16 update coming later this year, which probably means we won't see much of that at Google I/O, clearing the runway for what the search giant has focused on: AI.

Google has been increasing the ways it integrates its Gemini generative AI chatbot and services across its range of products, from Search to Maps to many of its apps. We do know that Gemini is coming to smartwatches in WearOS soon, and to cars in Android Auto in the coming months, so those may not be a focus. But we could also see more about extended reality at I/O, perhaps including more information about Project Moohan, the mixed reality headset the company is making with Samsung. That headset is also loaded with AI, as CNET principal writer Scott Stein found when he got an early hands-on with the device.

Here's what our team on the ground is reporting from Google's annual software event.

Final 'AI' and 'Gemini' word count

By Katie Collins

How many times did Google use the word "Gemini" during its keynote? The company knows that people like to keep track, so it did it for us this time. 

Presenters said "Gemini" 95 times. "AI" meanwhile, was uttered a total of 92 times. And as a little bonus, "exciting" -- which Sundar Pichai said earlier in the keynote was his favorite word -- was included just five times.

End

By James Martin
James Martin/CNET

Sundar Pichai closes out the show

By Imad Khan

This year's "AI" counter shows that "Gemini" was said more times than "AI." Pichai is ready to close out the show. He calls out how wildfires have been ravaging California and says FireSat, a new system from Google, uses satellite imagery and AI to help detect and fight fires. The same tech can also help people prepare for hurricanes.

Whether it's autonomous vehicles or quantum computing, Pichai says this technology is years away, not decades. 

With that, the Google I/O 2025 keynote is over.

Everything announced

By James Martin

Everything announced today at Google I/O 2025

James Martin/CNET

First brand partners revealed for Android XR glasses

By Tyler Graham

You'll be able to get your hands on Android XR soon, and you'll have some stylish choices when it comes to how you wear the next generation of AI-powered AR glasses. Google revealed the first brands to carry Android XR glasses will be Gentle Monster and Warby Parker, though more brand partners will be revealed in the future.

NBA player Giannis Antetokounmpo makes a cameo wearing Google's Android XR glasses

By Moe Long

Tech events tend to feature cameos by celebrities -- Sydney Sweeney made a brief, awkward appearance at Samsung Unpacked in 2024 -- and Milwaukee Bucks forward Giannis Antetokounmpo showed up at Google I/O 2025, rocking a pair of Android XR glasses. 

Google's Veo jumps to the big screen with Aronofsky-backed short 'Sestra' at Tribeca

By Nelson Aguilar

Google is rolling out Veo 3, its AI video generator, to filmmakers. To demonstrate the newest version of the software, the company is teaming up with Darren Aronofsky's production lab to try out the technology on three new shorts. The first is Eliza McNitt's "Sestra," a film about the director's birth. The live-action performances are woven together with AI-generated clips from Veo, like cosmic flashes, microscopic worlds and even a newborn version of the director. The film premieres June 13 at the Tribeca Film Festival.

Google's lightweight Android XR Glasses

By Katie Collins
James Martin/CNET

For those of us who aren't inclined to don Project Moohan-style goggles, Google's lightweight glasses might be a preferable way to experience Android XR.

They come with a camera and microphones to allow Gemini to hear the world around you, as well as speakers so you can listen to the AI chat in your ear, or to music or video calls. As you might expect, the Android XR Glasses will work with your phone, giving you access to all of your apps while keeping your hands free.

Google showed off the glasses in a live demo, which mostly worked pretty well. Two Googlers conversing in Farsi and Hindi showed off the live translation tool, which allowed them to talk to one another in their respective languages with live translation to English.

Android XR Glasses 3D map view

By James Martin

Android XR Glasses 3D map view

James Martin/CNET

Android XR Glasses

By James Martin

Android XR Glasses

James Martin/CNET

Two new AI subscription tiers: Pro and Ultra

By David Lumb
Google AI Pro vs Google AI Ultra
James Martin/CNET

Goodbye, Google One AI Premium. Hello, two new renamed subscription tiers. First is Google AI Pro for $19.99 per month, which includes a list of AI models: Gemini app with 2.5 Pro and Veo 2, Flow with Veo 2, Whisk with Veo 2, NotebookLM with higher limits than the free version, Gemini in Google apps like Gmail, Docs and more, Gemini in Chrome and 2TB of cloud storage.

But for the "trailblazers" out there, only Google AI Ultra will suffice -- for a staggering $250 per month. At that eye-watering price, subscribers get the Gemini app with 2.5 Pro Deep Think and Veo 3, Flow with Veo 3, Whisk with the highest limits, the same Gemini in Gmail, Docs and Chrome, access to the agent AI experiments in Project Mariner, 30TB of cloud storage and YouTube Premium thrown in (why not?).

Google shows off Android XR

By Imad Khan

This is a new Android platform built for the Gemini era, designed for AI and XR applications, from VR headsets to AR glasses. Google has developed it with Samsung and optimized the AI tech with Qualcomm. Android apps will work on the Android XR platform.

Samsung's Project Moohan is the first Android XR device, a VR headset that can teleport you anywhere in the world. You can talk with Gemini about what you're seeing and have it pull up websites and other applications. Like the Apple Vision Pro, it'll be a computing platform where you can do work and also watch baseball. Project Moohan is coming later this year.

AI Pro vs. AI Ultra

By James Martin

AI Pro vs. AI Ultra 

James Martin/CNET

Google AI Ultra Plan

By James Martin

Google AI Ultra plan

James Martin/CNET

10 billion pieces of content have SynthID

By David Lumb
A man stands next to a stage display with the words "SynthID" on it.
James Martin/CNET

Google introduced its SynthID digital watermark to mark AI-generated content, and announced that 10 billion pieces of content have it (though it's anyone's guess how much unmarked content is out there). Google is expanding its partnerships to get more AI-generated content watermarked, as well as debuting a detector that can identify which content is artificially created, which is rolling out to early testers today.

Lyria 2 powers a Music AI Sandbox for you to play in

By Moe Long

Google wants its AI models to be used for creativity. Lyria 2 powers the company's Music AI Sandbox, which lets you create music using its generative AI. The company claimed that creations with its Music AI Sandbox were expressive and melodious. Lyria 2 can now be used by enterprises, YouTube creators and musicians.

Flow, an image-to-clip generation tool, is launching today

By Imad Khan

Flow is a tool that lets you upload your images or AI-generated images via Imagen and begin assembling them into one AI-generated video. To create additional shots on a screen, you can click on a "+" icon to add another scene, along with a text prompt describing what you want. You can also add music from Lyria to create a full video clip. 

You can now natively generate audio in Veo 3

By Katie Collins

Google's AI video-generation tool has received an upgrade that allows you to generate audio natively within the tool. Veo 3 will let you generate sound effects, background sound and dialogue for your AI videos.

Veo 3 also comes with boosted video quality and a better understanding of physics. Want to play with it yourself? It's available today.

Imagen 4 won't fumble text as much

By Tyler Graham
Google/Screenshot by CNET

We all know one of the telltale signs of spotting an AI-generated image is looking out for misspelled words (or fake, hallucinated letters that wind up in the mix). Well, that might get harder with Google's new Imagen model -- it allegedly doesn't make those mistakes anymore.

In addition to proper spelling and spacing for any text included in generated images, Imagen 4 will use the context of your prompt to creatively theme the font for the text it uses. In the keynote presentation, Imagen was used to create a flyer for a dinosaur-themed event. The text was not only completely legible, but it was created with a dinosaur bone font that added to the vibe of the prehistoric art.

Keep an eye on the fingers, folks. It's going to get harder to spot AI images through text alone.

Gemini App Stats

By James Martin

Gemini App Stats

James Martin/CNET

Imagen 4

By David Lumb

Google has introduced Imagen 4, the newest version of its image-generating model, available today. Expect more detail, depth and autonomy for the model to make decisions about what's in the image (like, in Google's example, a dinosaur music festival). And it's 10 times faster at generating images than Imagen 3.

Veo 3 is Available Today

By James Martin
Google's Veo 3

Veo 3 is Available Today

James Martin/CNET

Gemini in Chrome

By James Martin
Gemini in the Chrome browser
James Martin/CNET

Google wants Gemini to be more personal, powerful and proactive

By Moe Long

Google aims to make its Gemini AI assistant more personal, transforming it into something that doesn't simply respond to queries, but rather, understands them. On stage at Google I/O, Google Labs VP Josh Woodward explained that the company wants Gemini to be personal, powerful and proactive. Some of the examples provided were connecting your Google search history to Gemini, so that it can see what you looked up recently -- like recipes -- to potentially provide more personalized recommendations. Even more unique, Google gave an example of Gemini being able to offer not only reminders to study, but quizzes based on the professor's lecture notes. Woodward said that Gemini could even make customized explainer videos on topics relevant to you. 

Gemini coming to Chrome

By Imad Khan

Google Gemini is coming to Chrome this week for subscribers. A new icon at the top of Chrome can understand the context of your webpage, so you can ask questions and cross-reference information on the web.

Gemini Live is coming to iOS and Android today

By Imad Khan

Gemini Live, which lets you chat with Gemini in real time and capture camera footage for on-the-fly interaction, is coming to the Gemini iOS and Android apps today.

AI Mode's Shopping feature brings Pinterest-like tiles to search

By Tyler Graham
Google/Screenshot by CNET

The new Shopping feature for AI Mode will generate images from over 50 billion shop listings online, combining them with other visual features you may have mentioned in your prompt so you can get an idea of what clothing will look like on your body, what furniture would look like in your room and more.

These images won't clog up Google Images, and they're completely tailored to your prompt. You can refine your prompt with more details to change the photo board and generate different listings from which to purchase items. This is like an on-demand Pinterest board that you can roll up on the go.

Online retailers will still house these shop listings, but you might not end up on their websites as often -- you can lodge your query, search for a product, and buy it without ever leaving AI Mode's shopping results page.

AI Mode application: Getting you sweet concert tickets

By David Lumb
A man stands in front of a stage presenting with a digital ticket interface on the screen.

One of Search's new AI Mode features is finding the right tickets for you, straight from the search bar.

James Martin/CNET

Now, I'll happily offload this task to Search's new AI Mode: buying tickets for concerts and events. Rather than sweating over dates, times, seat qualities, price and other factors, you can zero in on what you want right from the search bar. When you drop in a query, AI Mode will surface different sets of tickets so you can find what you want. 

Now, if only it would stand in a virtual line or leap in to get tickets before they sell out, I'd be really impressed.

Try On feature helps you try on clothes, before you buy

By Imad Khan

When shopping online for clothes, it can be hard to visualize how you might look. Try On is a new feature that takes an input image of yourself and uses AI to generate how the clothing will look on you. It uses 3D shape understanding to convert your 2D photo into a 3D render, and the AI considers how the fabric will stretch and shape to your body. Once you've found a dress you'd like within Google Search, it can help you buy it: if the price is more than you want to spend, you can track it and be notified when it drops to what you want to pay. The AI agent can even buy it for you automatically with Google Pay.

Try On is available today in AI Labs.

Google's Try It On

By James Martin

Google's Try It On personalized clothing experience

James Martin/CNET

Google's Search Live provides real-time information

By Moe Long

Google's Search Live showcased different scenarios using AI Mode to give real-time feedback, like identifying a ripe strawberry to pluck from the vine. In another demo, a user asked what science experiment they were making with hydrogen peroxide, toothpaste and yeast; Search Live correctly answered elephant toothpaste. The examples featured Google team members helping one another take on new challenges, like science experiments, likely to highlight Search Live's real-world applications.

Search Live

By James Martin
Google Search Live
James Martin/CNET

Project Astra live capabilities come to AI Mode

By Imad Khan

This feature is called Search Live, which means that as you use your camera, Google Search can see what you see. While the camera is on, you can start asking questions, and AI Mode can respond to you in real time. 

AI Overviews are apparently making us 'happier'

By Katie Collins

How do you feel about Google's AI Overviews? I'll bet you have opinions, because I know I do.

"As people use AI Overviews, we see they are happier with their results and they search more," said Pichai. It's a bold claim, and I don't know how Google is measuring this, but it doesn't reflect how I feel personally.

For those of us who create the content that Google serves up in search, AI Overviews pulls from and summarizes our work in ways that don't necessarily reflect the entirety of what we've written. But even as a user of Google Search, I'd prefer to choose my own primary source from a list of options rather than have Google decide for me, and then present me with an algorithmically constructed summary.

AI Mode is like Google Search on steroids

By Imad Khan

Google reimagines what Google Search might look like with AI Mode and Gemini 2.5 at its core. AI Mode is another tab in Google Search. Here, if you search for an art exhibit in town, AI Mode will not only pull up results like a Google Search, but also fill it with Google Maps imagery. AI Mode also has personalization, knowing your email and other history to give you a personalized experience. 

AI Mode also has a deep search to help you create expert and fully cited reports. 

Google's Rajan Patel takes the stage to show how he used AI Mode to look up sports statistics, mainly about torpedo bats in baseball and how they're affecting the game. Search cross-references which players are using torpedo bats and how they performed last year without them, while accounting for the fact that it's still early in the season, and it can generate graphs dynamically. The demo illustrates how AI Mode can tailor its response to each query.

Gemini Diffusion turns static into text 5× faster

By Nelson Aguilar

A diffusion model adds noise to an image until it's pure static, then learns to remove all that noise. Because it can tweak the draft at every step, the model catches and fixes errors on the fly. Google's new Gemini Diffusion uses this technique to write and edit text, like code or math problems, about five times faster than its previous quickest model, while maintaining accuracy.

Flow, Imagen 4 and Veo 3 supercharge creative AI

By Katelyn Chedraoui

Google just dropped three new creative AI models. Flow is the next step in Google's AI video journey, named after the phase of the creative process where everything is in flow. It's powered by the next generation of Google's AI image and video models, Imagen 4 and Veo 3. Notable among the updates is Veo's new ability to natively generate and sync audio to its AI video clips. That's a first for Google, and it's something that will distinguish it from competitors like OpenAI's Sora and Adobe's Firefly. These are all rolling out for US users in the Gemini app starting today, but you'll need to pay up with the new premium Ultra subscription tier.

If Project Astra's Visual Interpreter features could help you -- sign up for the waitlist

By Tyler Graham
Google/Screenshot by CNET

Google's Project Astra has experimental Visual Interpreter features that are designed to help people with visual impairments navigate their immediate surroundings. In the showcase, a musician named Dorsey P. used the AI tool to move around the venues he was playing, showing how Project Astra can quite literally read the room.

If this is a tool that could help you out, the bad news is that you can't open it and get started right away. But you can put yourself on a waiting list to access the new experimental feature by navigating to the Google Form here.

AI Mode in Search

By James Martin

AI Mode in Search

James Martin/CNET

AI Mode comes to all users today

By Imad Khan

Pichai says AI Overviews has 1.5 billion users and is driving 10% growth in certain types of queries. He calls it one of Google's most successful launches (despite it getting things wrong), and people are asking more complex queries as a result. Earlier this year, Google announced AI Mode, a new section at the top of Google Search that's a blend of Gemini and Google Search, and which lets you ask follow-up questions. AI Mode is now available to all users in the US today. Gemini 2.5 is also coming to Search.

Gemini AI

By James Martin
Gemini AI
James Martin/CNET

Gemini can make robots do our bidding

By Katie Collins

Thanks to Gemini's improved understanding of the physical world, Google has refined it to create a specialized model that can be applied to robots. It's called -- you guessed it -- Gemini Robotics.

The model teaches robots to perform tasks "like grasp, follow instructions and adjust to novel tasks on the fly," said Hassabis.

Those lucky enough to be at I/O in person today will get to play with the robots in demos following the keynote, which suddenly makes me wish I were there!

Gemini Action Intelligence in action: Fixing a bike

By David Lumb

OK, all together now: Google's presentation gave a little demo simulating how Gemini could help with a task, which is essentially a dozen requests strung together, each building on the last. In the scenario, a man must fix a bike (we've all been there) and asks Gemini for instructions (it brings up a YouTube video), to hunt down a tool (it calls local shops until it finds one), to hold on while chatting with a friend about lunch, and even to give its opinion on dog fashion. 

With today's AI, that's a jumble of barely connected (and often non-sequitur) questions and requests strung together. But it reflects how we humans go about our day: disjointed as heck. Here's hoping it's as easy to use Gemini's Action Intelligence as Google demonstrated.

Google DeepMind Co-Founder Demis Hassabis

By James Martin
Demis Hassabis

Google DeepMind Co-Founder Demis Hassabis

James Martin/CNET

World Models means understanding the world

By Imad Khan

Hassabis says world models are AI models that can understand the world: they can simulate a 3D environment from a basic image, or understand fluid and wind dynamics. With Google's Veo video-generation tool, Hassabis says you can give it wild prompts, and even fanciful ideas will simulate accurately, as if they were in the real world.

Gemini 2.5 Pro Deep Think is on its way

By Imad Khan

Gemini 2.5 Deep Think is a new AI research model from DeepMind. It's leading in multiple benchmarks and will only be available to trusted testers at first, according to Hassabis. 

Here's everything you can build with Gemini right now

By Tyler Graham
google-i-o-25-keynote-1-27-10-screenshot.png
Google/Screenshot by CNET

Much of today's keynote so far has been noting how much progress the Google Gemini large language model has made over the past 18 months. One of the best ways to concretely show that off to a layperson is to show them what they can use the model to build.

Google showed off 30 different things you can create with Gemini right now. The model can pull together visual elements to simulate physics, nature and light. The AI can finish your existing drawings based on prompts you enter. It can create complex three-dimensional puzzles and turn images or vocal commands into lines of code.

Your drawings can be made into models, even 3D printable ones. If you don't have time to scan documents or hours of video, Gemini can do that for you.

Google wants to let you animate, visualize and create anything you can think of through Gemini -- and it's continuing to showcase new features during this keynote that aim to let you do just that.

Gemini 2.5 Flash is more efficient, using fewer tokens

By Moe Long

According to Google, Gemini 2.5 Flash is more efficient and can use fewer tokens for the same performance -- to the tune of a claimed 22% efficiency gain. If you're building with it, that means fewer tokens for the same results.

Gemini 2.5 Pro can soon do live translation

By Imad Khan

DeepMind's Tulsee Doshi takes the stage to show off a demo of live translation where an AI voice can speak in English and switch to Hindi in midsentence. It can sound more human-like and even whisper, which is kind of creepy. Doshi also says that Gemini 2.5 Pro is more secure, with protections against prompt injection, which is how people use prompts to trick the AI into doing what it isn't meant to do. 

More people are using AI

By Nelson Aguilar
Nelson Aguilar/CNET

In just a year, Google's systems have exploded from processing 9.7 trillion tokens a month to roughly 480 trillion -- a 50x increase.

Google said AI... how many times now?

By David Lumb

A couple years ago at Google I/O 2023, we joked about how CEO Sundar Pichai and his fellow presenters said "AI" 140 times. Now, we haven't even bothered to keep track. It's no longer even Google AI/O. AI is here. AI is always. It's the age of GeminAI.

Google Meet is getting real-time translation functionality

By Tyler Graham
Google/Screenshot by CNET

Google's video-chatting software is getting some Project Starline feature integration -- and that includes real-time speech translation between speakers on a call.

Anyone using this new feature will be able to choose the language they're speaking and the language they prefer to hear, and the AI tools will interpret these languages and translate for you as you speak (and listen to the other person) on the call.

During Google's presentation, the delay between a speaker beginning to talk and the translation converting that speech into another language was impressively short -- but the scenario was very formal and didn't include much slang, so it's possible more casual conversation could slow the model down.

English and Spanish translations are available on Google Meet right now for subscribers. Google CEO Sundar Pichai promised that more languages would be rolling out over the next few weeks.

Demis Hassabis takes the stage

By Imad Khan

Demis Hassabis, CEO of Google DeepMind, the company's AI research arm in London, takes the stage. He's highlighting how users are using Gemini 2.5 Pro and how the model is topping coding benchmarks in LMArena. Gemini 2.5 Flash, the company's lighter AI model, is coming in early June, with Pro following soon after. Hassabis says a preview is available now for some users. 

Google says Gemini is getting smarter -- and better at telling us we're wrong

By Moe Long

Google claims that its Gemini AI assistant is getting smarter, as well as better at telling us when we're wrong. In a demo, a video showed a person asking about what they called a convertible, and Gemini corrected them that it was a garbage truck. The examples were pretty exaggerated -- like calling a clearly tall palm tree "short" -- so it remains to be seen how adept Gemini is at correcting us in less obvious cases.

Personal context better learns who you are

By Imad Khan

With your permission, Gemini can learn more about you. Gemini can read your past notes and emails to help write your emails to better match your tone and style and automatically generate replies. It'll come to Gmail this summer for subscribers. 

Gemini AI Agent Mode

By James Martin
Gemini AI Agent mode
James Martin/CNET

Project Starline is now Google Beam

By David Lumb

Google's Project Starline -- its 3D stereoscopic chat platform that uses multiple cameras to add depth to your big-screen video conversations -- has been rebranded as Google Beam. And yes, that means it's close to being released, and will be coming soon in partnership with HP.

Google is in its 'Gemini era'

By Tyler Graham
Google/Screenshot by CNET

After Google I/O opened with an AI-generated video featuring altogether too many capybaras, Google CEO Sundar Pichai opened the keynote with a resonant message: The company is "in our Gemini era."

Pichai then shared some of the milestones that Gemini has achieved over the last year. Gemini 2.5 Pro beats out every competitor in the metrics measured by LMArena, and the multimodal large language model has reached the top spot on WebDev Arena. The AI model produces hundreds of thousands of lines of accepted code on Cursor every single minute, and it has even beaten Pokemon Blue.

Gemini is processing 480 trillion monthly tokens -- it's being used by more developers and laypeople than ever before.

Google Beam

By James Martin
Google Beam
Google/Screenshot by CNET

Pichai shows off the numbers

By Imad Khan

Pichai takes a few moments to highlight some of the metrics Google's AI tools have hit. He points out how its new Ironwood AI chip can do more processing for less, and adds that Google is now processing over 480 trillion tokens a month. Tokens are chunks of words that AI systems use to parse data. Gemini adoption is also rapidly increasing: it's seeing 400 million monthly active users, which matches ChatGPT.

Google AI overviews are on the rise

By Moe Long

Chances are, you've noticed an uptick in Google's AI Overviews in Search. The feature now has over 1.5 billion users per month.

Massive growth in AI

By James Martin
chart showing monthly tokens processed going up fast
Google/Screenshot by CNET

Artificial Pokemon Intelligence?

By Nelson Aguilar
Nelson Aguilar/CNET

Gemini has beaten Pokemon Blue, a feat that Sundar Pichai jokingly brands "Artificial Pokemon Intelligence."

Sundar Pichai at I/O

By James Martin
James Martin/CNET

Google I/O has started, and as expected, AI is taking the spotlight

By Moe Long

Google I/O 2025 has begun, and as predicted, AI is taking center stage. A lot of the clips featured small text at the bottom of the screen, saying they were created with Veo -- Google's AI video generator -- and Imagen, its image generator. 

Sundar Pichai enters the stage behind AI-generated video

By Imad Khan

Google CEO Sundar Pichai has taken the stage. Leading the show was an AI-generated sizzle reel, showing people, animals and balloons. Already, Pichai is highlighting Google's quick AI advancements and product releases. 

Here we go -- Google I/O is starting!

By David Lumb

Lights, camera, AIction (OK, I'll stop with the AI puns). Let's go!

You can use the same tools Toro y Moi just used right now

By Tyler Graham
Google/Screenshot by CNET

Did you enjoy Toro y Moi's Google I/O DJ set? It was created using Google DeepMind AI tools that you can try out right now. Using Lyria RealTime, the disc jockey was able to create layered instrumental tracks by turning a few knobs -- and he would routinely add his own vocals into the mix.

Google's Lyria RealTime tool is available for you to try out now, although you unfortunately won't have the soothing lo-fi visual dreamscapes floating around behind you like Chaz did on stage.

5-minute warning!

By David Lumb

We're being told to take our seats because Google's extravaganza is about to begin. Get ready for whatever Google's got in store.

Luminar Mobile photo-editing app now available for Android and ChromeOS

By Jeff Carlson

Coinciding with Google I/O, Skylum today released an Android version of its photo-editing app Luminar Mobile for use on phones, tablets and ChromeOS devices. The app debuted on iOS last year with a novel interface and AI-based tools for quickly enhancing and fixing photos.

Luminar Mobile app on an Android tablet. A photo of a man with his arm against the wall is being edited to adjust the lighting in the scene.

Luminar Mobile is now available on Android and ChromeOS devices.

Skylum

Skylum worked with Google to ensure that the Luminar Mobile interface works across a wide range of screen sizes and form factors on Android devices. For foldable phones, that meant adapting the interface for various arrangements, such as "tabletop" mode, where the phone is half opened with the main screen at a 90-degree angle.

In an interview with CNET, Kostyantyn Tymoschuk, vice president of growth at Skylum, said, "One of the main goals was to deliver this unique user experience by releasing this application across all large screens, starting from just a regular smartphone, following up with foldable devices and also Chromebooks." He noted that "our interfaces will adapt smoothly and transform to the form factor [customers] will be actually using."

According to Skylum, an Android version of Luminar Mobile has been in high demand from customers of the Windows version of Luminar Neo, who comprise the majority of desktop users.

Luminar Mobile for Android and ChromeOS is available now for devices running Android 11 and later with at least 4GB of RAM. It's priced on a subscription model: $5 a month, $30 a year, or a $60 lifetime license.

'Are you wearing the...'

By David Lumb
A man wears a black quilted vest as he fiddles and twists lots of knobs on his mixing board next to a keyboard.
James Martin/CNET

"...Toro y Moi Google vest? Yeah, I am."

Toro y Moi takes the stage to AI DJ

By Imad Khan

Singer-songwriter Toro y Moi is on the main stage, serenading incoming audience members with a dreamy Indian-inspired set. He's using AI tools to bring in other instruments and generate music on the fly, while also mixing in his own instrumentation and vocals. Behind him are AI-generated dreamscapes of instruments and other equipment, highlighting what he's playing. It's definitely a departure from the hyper energy of Marc Rebillet, who performed at last year's I/O. 

Speaking to the crowd at Shoreline Amphitheatre, Toro y Moi said, "Personally, music is my spiritual guide. Music is going towards AI with or without me, and it's my responsibility as an artist to keep up."

CNET's experts are on the ground at Google I/O right now

By Tyler Graham
James Martin/CNET

Some of CNET's best and brightest mobile reporters, AI reporters and photojournalists are getting situated in Mountain View, California, to provide on-the-ground reporting for Google I/O. Managing editor Patrick Holland and senior reporters David Lumb, Abrar Al-Heeti and Imad Khan are some of the writers who will be providing expert analysis of the Google I/O reveals.

If you want to follow along with the event virtually, you can tune in to the keynote livestream here. We'll be providing expert analysis and industry takes on CNET as the event unfolds, and our team on the ground will be pulling together information you won't find elsewhere.

Toro y Moi collaborating with AI

By James Martin

Toro y Moi collaborating with AI to make pre-show music

James Martin/CNET

Google I/O reporting -- powered by Creme?

By David Lumb
A man holds up an iced coffee drink in front of a stage.
David Lumb/CNET

At Shoreline Amphitheatre, your stoic CNET crew is settled into their seats, including CNET Managing Editor Patrick Holland -- and at this point in the morning, after fighting our way through transit and long lines to get in, we're entirely powered by drinks and snacks provided by Google. Vegan breakfast burritos, English muffins, lox and bagels and, of course, iced coffee. Cheers! 

What does it mean to have Gemini in Wear OS?

By Jeff Carlson
Google/CNET

At last week's Android Show, Google announced that Wear OS 6 will include Gemini, which sounds like a natural extension of the AI features available on Android phones. But the relationship between watch and phone could be tighter than it first sounds.

AI features are processor-intensive -- that's why the latest generation of phones from the major manufacturers all tout their models as being designed for AI. To get better performance, a phone will do all or most of the computation locally; whenever it needs to pass along some of that work to the cloud for processing, it's introducing some lag into the operation.

But watches don't include that kind of computational horsepower. They're even more sensitive about reducing battery-draining features -- such as AI processing. So where's the tradeoff? In her article about Wear OS 6, Vanessa Hand Orellana breaks down how the interaction will work when Wear OS 6 is released later in the year: "Gemini on Wear OS processes requests in the cloud, which means it still relies on an internet connection (via phone, Wi-Fi or LTE), but the idea is that you'll be able to get more done without pulling out your phone."

From what we know so far, it sounds like the watch will be more of a Gemini interface, sort of like ordering fast food via a speaker at a drive-through versus at the take-out window itself; you'll make a request and get the result when it's ready. As wearable processors improve, perhaps that balance will shift more to on-device processing.

Google killed the Pixel Tablet too early, just as its AI ambitions are taking off

By Imad Khan
Scott Stein/CNET

In the sea of dead Google products, one abandonment that keeps puzzling me is the Pixel Tablet. Introduced in 2023, it was Google's re-entrance into the tablet category after four years. It was part tablet and part smart home device, able to move between its magnetic dock and your couch. Instead of bringing out a Pixel Tablet 2 or a larger variant, Google pretty much let it die a quiet death.

It's unfortunate because, while not a perfect product, its hybrid approach was a nice innovation. Not only that, Google's total embrace of AI means that a Pixel Tablet 2 could have been the center of your AI-powered home, helping you control various Matter-powered devices with Gemini, especially as Amazon's Alexa is still trying to figure out what to do with Panos Panay at the helm. And since it's unlikely we'll see many hardware announcements during I/O this year, don't expect a Pixel Tablet 2.

Is Google about to reinvent search with AI Mode for the masses?

By Nelson Aguilar
Google AI Mode
Google

Google may be ready to flip the switch and make AI Mode an option for everyone in Search, complete with a permanent button on the Google homepage.

Introduced in March as a Google Labs experiment, AI Mode is pretty much AI Overviews on steroids -- it's a Gemini-powered search feature for your more complicated questions, complete with follow-up prompts and useful web links when you want to dig deeper. Google recently announced that it's rolling out to a small number of users who sign up for the waiting list.

And we might get "Live for AI Mode," a feature discovered in a recent Google app beta by 9to5Google that lets you speak conversationally with Google for a more natural interaction with AI Mode.

As with most Google rollouts, US English users would get first dibs, with more languages and regions to follow.

How to watch Google I/O 2025

By Moe Long
Google/Screenshot by CNET

Google I/O -- the company's annual developer conference -- takes place on May 20 and 21 in Mountain View, California. The Google keynote, which will feature major product (software and hardware) announcements, happens at 10 a.m. PT (1 p.m. ET, 6 p.m. BST).

You can watch the keynote via a livestream, plus check out the full slate of events, including the Google I/O developer keynote, which is scheduled for 1:30 p.m. PT (4:30 p.m. ET, 9:30 p.m. BST).

Another big tech event demands the question: Which celebrities will appear?

By Jeff Carlson
Animated Android version of basketball player A'ja Wilson lifting weights.

For its Android Show, Google made animated Android representations of several celebrities, including A'ja Wilson.

Screenshot by Jeff Carlson/CNET

Google I/O is a tech event with a focus on the technologies that developers will be bringing to market for the rest of the year -- so of course it's time to talk about celebrity appearances.

Like Apple's WWDC and Samsung's Unpacked, Google I/O is an event in the sense that Google is also trying to catch the attention of regular people who are choosing where to spend their money. Those folks might not know the difference between Gemini Pro and Gemini Flash or how many tokens are allotted to each one, but they probably recognized multihyphenate Donald Glover when he appeared in a video for the generative video project Veo, or actress-producer Sydney Sweeney smiling in the audience when Samsung created a generated 3D image of her on stage. (I can only imagine how much she was paid for less than a minute's appearance, though she is a global ambassador for Samsung's Galaxy Z Flip series phones.)

Last week's brief Android Show was packed with celebrity appearances -- if you recognized their names. Instead of live appearances, several sports figures appeared during the transitions between sections of the presentation as animated Android mascots: Formula One racing drivers Lando Norris and Oscar Piastri, soccer star Trinity Rodman, and basketball players Giannis Antetokounmpo, A'ja Wilson and Jimmy Butler.

So who might we see this week? Pedro Pascal seems to be everywhere right now, but I'm not sure even he could jet back from Cannes so quickly. After Samsung previewed the Project Moohan headset running Android XR, I can imagine actor David Corenswet showing up to tie his Man of Steel super-peepers to the upcoming Superman movie.

Where art thou, Project Starline?

By David Lumb
A man sits at a desk in front of a Project Starline TV display, holding his hand out several feet from the screen to "catch" an apple held out by his conversation partner.

Google's Project Starline prototype allows participants to have a remote conversation with depth effects -- though difficult to see from this angle, the seated reporter is holding his hand out below where the other person is "holding" the apple.

David Lumb/CNET

It's been a while since we've heard anything about Project Starline, Google's experimental 3D chat system that adds depth to video calls so it feels like your conversation partner is really there -- so long as you both have Starline's extensive six-camera setup configured. I got to experience it at Code Conference 2023, and my colleagues have checked in on its progress.

While it's come a long way since Google first debuted the tech in 2021, collapsing it from a booth-sized interface to half a dozen cameras ringing a big display, I'm waiting to see how small it can get -- especially after HP just announced it will start releasing commercialized versions of Starline later in 2025.

What's next in the 'Gemini Era'?

By David Lumb
Google Pixel 9A phone with a cat photo on the screen

Gemini Live can now see, and it's wild.

James Martin/CNET

Last year at Google I/O 2024, CEO Sundar Pichai said that we had entered the company's "Gemini era" -- and it sure has seemed like the company is working its AI chatbot into more and more software and services. But it's also refined its AI centerpiece, releasing Gemini 2.0 with new versions of its various generative AI models back in February in the wake of increased competition from China's DeepSeek. So we wonder: Will Google I/O 2025 see more breakneck advancement of Gemini's capabilities, or more new ways to plug it into devices for intriguing applications -- like the Gemini Live camera mode that started rolling out to Pixel phones this month?

Android will make an appearance, but not in the way you think

By Patrick Holland
A gleaming mixed reality headset and Android XR label with a sparkling background
Google/Getty Images/CNET

Last week, Google held the Android Show: I/O Edition. The virtual event showed off a number of new Android 16 features. The new OS has three main pillars: a significant UI overhaul called Material 3 Expressive; support for Gemini on more devices; and safety and security tools. And this doesn't just apply to phones. Wear OS will get Material 3 Expressive additions, and TVs, cars, Android XR and Wear OS will also get integrated Gemini support.

So with all these announcements, what more can Google possibly tell us about Android? Well, expect to hear a lot about Android XR and mixed reality. Earlier this year, we got to see (and not touch) Samsung's Project Moohan headset (the first Android XR headset). Luckily, CNET's Scott Stein got hands-on time (and on-head time) with Samsung's headset back in December. But there are still a lot of unknowns both around Android XR and the headset. Hopefully, we'll learn more at I/O.

More AI that's truly practical please, Google!

By Patrick Holland
Google Pixel 9A phone with a cat photo on the screen

Gemini Live can now see, and it's wild.

James Martin/CNET

AI can do a lot. It can rewrite my text messages to sound like Shakespeare (zounds!), transcribe and give me notes from a meeting, and remove unwanted objects (and people) from my photos. But we need more practical examples of how AI is useful. Gemini Live, announced at last year's I/O as Project Astra, is one of the most compelling pieces of tech to come from AI. It can offer me more information about what's on my screen or what my camera sees. And this is just one of the many flavors of Gemini that Google has. I'd be happy if, at this year's I/O, Google only showed real-world examples of how helpful its AI can be and didn't announce a single new thing. Stop showing the possibilities and show the reality.