
Google announced several new AI features under its Gemini Intelligence branding during its Android Show: I/O Edition event on Tuesday, expanding Gemini’s role across Android devices with tools that complete tasks across apps, browse the web, fill out forms, transcribe speech, and generate Android widgets from natural-language prompts.
The updates continue Google’s effort to integrate Gemini more deeply into Android workflows and device interactions. The company said the features will first launch on the latest Samsung Galaxy and Google Pixel devices this summer before expanding to other Android devices later in the year.
Gemini Expands App And Task Automation
Google had previously introduced early agentic AI capabilities for Gemini during the launch of the Samsung Galaxy S26 earlier this year. At the time, the company demonstrated features such as ordering food, booking rides, reserving a front-row bike for a spin class, locating a class syllabus in Gmail, and searching for related books.
The latest update adds broader multistep task execution across Android apps.
Google said users will be able to activate Gemini by pressing the phone’s power button and describing a task verbally. Gemini will then use information currently displayed on the screen as contextual input while carrying out the request.
One example shared by the company involved copying a grocery list from a notes application and automatically adding the items into a shopping app’s cart. Google noted that Gemini will pause before completing purchases and wait for the user’s final confirmation before checkout.
Web Browsing And Chrome Integration
Google also confirmed that a Gemini auto-browsing feature first introduced experimentally in January is now coming to Android devices.
The feature allows Gemini to browse websites and complete online tasks such as booking appointments on behalf of users.
In late June, Android devices will additionally receive Gemini integration in Google Chrome. The feature will allow users to summarize web pages or ask questions about on-screen content directly within the browser, similar to Gemini’s existing desktop Chrome functionality.
Personal Intelligence And Gboard Features
Google also announced new Personal Intelligence capabilities tied to form completion.
The company said Gemini will learn details about users to help automatically fill out forms. Google stated that the feature is optional and can be disabled through settings at any time.
Gemini is also being integrated into Android’s Gboard keyboard through a feature called Rambler.
According to Google, Rambler uses Gemini’s multimodal AI capabilities to let users dictate in their own speaking style while the system transcribes and formats the text automatically. It can also remove filler words during transcription.
Google compared the feature to existing AI-powered dictation tools already on the market.
AI-Generated Android Widgets
Google is also introducing a feature that allows users to create Android widgets through text prompts, reflecting growing interest in AI-assisted coding tools.
Users can describe the type of widget they want in natural language, and Gemini will generate it automatically. Google provided an example where a user could create a meal-planning widget using a request such as: “Suggest three high-protein meal prep recipes every week.”
The company acknowledged that AI-generated widget creation is not entirely new. Hardware startup Nothing released a similar tool last year.
Google said Gemini Intelligence features will follow the company’s Material 3 expressive design language across Android interfaces and experiences.
