Google has outlined a broader AI-focused direction for Android, positioning the platform as an “intelligence system” designed to automate tasks, anticipate user needs, and extend app experiences across multiple device categories, including phones, cars, watches, XR headsets, and foldables.
The announcement, made during The Android Show ahead of Google I/O 2026, introduces a new set of Gemini-powered capabilities aimed at integrating AI agents more deeply into Android’s operating system and app ecosystem.
At the center of the update is “Gemini Intelligence,” a suite of features that allows Gemini to perform multi-step actions across supported apps. Google said the system will initially support selected partners in categories such as food delivery and ride-sharing, enabling tasks like ordering groceries, requesting rides, or completing purchases through conversational prompts.
The company stated that the framework is designed to drive high-intent engagement with apps while reducing the need for developers to build standalone automation systems. Gemini handles navigation and task execution directly within the Android environment while maintaining user visibility and approval controls.
Google is also expanding Android AppFunctions, an API framework that enables developers to expose app services, actions, and data directly to AI agents and the operating system through natural language interfaces. The company said early-stage testing is already underway with selected apps including KakaoTalk for actions such as sending messages or initiating calls.
According to Google, AppFunctions currently supports local execution for 25 app use cases on devices from multiple Android manufacturers. Developers can experiment with the APIs locally and apply for early-access integration programs.
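Conceptually, an AppFunctions-style integration lets an app declare callable actions, each with a natural-language description an agent can use to decide when to invoke it. The sketch below is plain illustrative Kotlin, not the actual androidx.appfunctions API; the `AgentAction` and `ActionRegistry` names, and the string-map parameter shape, are invented for this example.

```kotlin
// Illustrative sketch only: models the idea of an app exposing actions to
// an AI agent. These names are invented, not the real AppFunctions API.
data class AgentAction(
    val name: String,                             // identifier the agent references
    val description: String,                      // natural-language hint for the model
    val handler: (Map<String, String>) -> String  // executes the action
)

class ActionRegistry {
    private val actions = mutableMapOf<String, AgentAction>()

    fun register(action: AgentAction) {
        actions[action.name] = action
    }

    // What an agent runtime might call after resolving a user prompt
    // ("text Alex that I'm running late") to an action name and parameters.
    fun invoke(name: String, params: Map<String, String>): String {
        val action = actions[name] ?: return "Unknown action: $name"
        return action.handler(params)
    }

    // Descriptions the agent can read to pick an appropriate action.
    fun describeAll(): List<String> =
        actions.values.map { "${it.name}: ${it.description}" }
}

fun buildMessagingRegistry(): ActionRegistry {
    val registry = ActionRegistry()
    registry.register(
        AgentAction(
            name = "send_message",
            description = "Send a chat message to a named contact",
            handler = { p -> "Sent \"${p["text"]}\" to ${p["to"]}" }
        )
    )
    return registry
}
```

In this model, a messaging app like the KakaoTalk example above would register actions such as `send_message`, and the agent would resolve a conversational prompt to `invoke("send_message", mapOf("to" to "Alex", "text" to "Running late"))` while the OS surfaces the action for user approval.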
Alongside AI automation, Google announced broader widget and interface updates designed for adaptive experiences across form factors. Widget support is expanding to Android Auto-compatible vehicles, a base Google said now exceeds 250 million cars globally.
Jetpack Glance and the new RemoteCompose framework will introduce additional widget interaction features including snap scrolling, enhanced buttons, adaptive resizing, and particle effects. Google also revealed a “Create My Widget” feature powered by Gemini that can generate custom widgets dynamically based on user requests.
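Adaptive resizing generally means a widget renders a different layout depending on how much space its host grid gives it. The snippet below sketches that size-bucket idea in plain Kotlin rather than the actual Jetpack Glance API; the layout names and dp thresholds are invented for illustration.

```kotlin
// Illustrative only: maps a widget's available size to a layout variant,
// the general idea behind adaptive widget resizing. Thresholds and
// variant names are invented, not taken from Jetpack Glance.
enum class WidgetLayout { ICON_ONLY, COMPACT_ROW, FULL_GRID }

fun chooseLayout(widthDp: Int, heightDp: Int): WidgetLayout = when {
    widthDp < 100 || heightDp < 100 -> WidgetLayout.ICON_ONLY  // tiny cell: icon plus label
    heightDp < 180 -> WidgetLayout.COMPACT_ROW                 // short strip: one row of content
    else -> WidgetLayout.FULL_GRID                             // ample space: full grid of items
}
```

A real Glance widget would make a similar decision from the sizes the launcher reports, so the same widget can scale from a car dashboard tile to a full tablet panel.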
The company further detailed updates to its adaptive app development tools, including Jetpack Navigation 3 and upcoming Compose 1.11 features focused on responsive layouts, flexible UI systems, and multi-screen optimization.
Google is also increasing its focus on XR and wearable experiences through updates to the Android XR SDK. The platform will support upcoming XR devices, including connected smart glasses such as XREAL’s Project Aura, while enabling adaptive Android apps to appear automatically within immersive environments.
Additional updates include expanded support for video apps in Android Automotive OS, improved testing frameworks for XR developers, and new projected display APIs that connect smartphone interfaces to wearable displays and glasses.
Google said Gemini Intelligence features will begin rolling out this summer on selected Samsung Galaxy and Google Pixel devices before expanding later this year to watches, cars, glasses, tablets, and laptops running Android.

