Google is previewing a new Gemini AI model designed to navigate and interact with the web via a browser, letting AI agents do things inside interfaces designed for people rather than robots. The model, called Gemini 2.5 Computer Use, uses “visual understanding and reasoning capabilities” to analyze a user’s request and carry out a task, such as filling out and submitting a form.
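Under the hood, computer-use agents like this typically run a loop: the client sends the model a screenshot of the page along with the user’s goal, the model replies with a single UI action (click, type, submit), the client performs that action in the browser, and the cycle repeats until the task is finished. The sketch below illustrates that loop in Python; every name in it (UIAction, request_next_action, and so on) is a hypothetical placeholder for illustration, not Google’s actual API.

```python
from dataclasses import dataclass


@dataclass
class UIAction:
    """One step proposed by the model, e.g. click at (x, y) or type text."""
    kind: str          # "click", "type", "submit", "done", ...
    x: int = 0
    y: int = 0
    text: str = ""


def capture_screenshot(browser) -> bytes:
    """Placeholder: grab the current page as an image for the model to 'see'."""
    raise NotImplementedError


def request_next_action(task: str, screenshot: bytes) -> UIAction:
    """Placeholder for a call to a computer-use model: it inspects the
    screenshot, reasons about the task, and returns the next UI action."""
    raise NotImplementedError


def execute_action(browser, action: UIAction) -> None:
    """Placeholder: perform the click or keystroke in the real browser."""
    raise NotImplementedError


def run_agent(browser, task: str, max_steps: int = 20) -> None:
    """Loop: screenshot -> ask model for an action -> perform it -> repeat,
    until the model reports the task (e.g. a submitted form) is done."""
    for _ in range(max_steps):
        shot = capture_screenshot(browser)
        action = request_next_action(task, shot)
        if action.kind == "done":
            return
        execute_action(browser, action)
```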
It can be used for UI testing or for navigating human-facing interfaces that don’t have an API or other direct connection available. Other versions of this model have been used for agentic features in AI Mode and Project Mariner, a research prototype that uses AI agents to carry out tasks on their own in a browser, like adding items to your cart based on a list of ingredients.
Google’s announcement comes just one