Google just wrapped its big keynote at I/O 2025. As expected, it was full of AI-related announcements, ranging from updates to Google's image and video generation models to new features in Search and Gmail.
But there were some surprises, too, such as a new AI filmmaking app and an update on Project Starline. If you didn't catch the event live, you can catch up on everything in the roundup below.
Google has announced that it's rolling out AI Mode, a new tab that lets you search the web using the company's Gemini AI chatbot, to all users in the US this week.
Google will test new features in AI Mode this summer, such as deep search and a way to generate charts for finance and sports questions. It's also rolling out the ability to shop in AI Mode in the "coming months."
Project Starline, which started as a 3D video chat booth, is taking a big step forward. It's becoming Google Beam and will soon launch inside an HP-branded device with a light field display and six cameras that create a 3D image of the person you're video chatting with.
Companies like Deloitte, Duolingo, and Salesforce have already said they will add HP's Google Beam devices to their offices.
Google has announced the latest version of its Imagen text-to-image generator, which the company says is better at generating text and can export images in more formats, like square and landscape. Its next-gen AI video generator, Veo 3, will let you generate video and sound together, while Veo 2 now comes with tools like camera controls and object removal.
In addition to updating its AI models, Google is launching a new AI filmmaking app called Flow. The tool uses Veo, Imagen, and Gemini to create eight-second AI-generated video clips based on text prompts and / or images. It also comes with scene-builder tools to stitch clips together and create longer AI videos.
Gemini 2.5 Pro is getting an "enhanced" reasoning mode called Deep Think.
The experimental Deep Think mode is aimed at complex questions involving math and coding. It's capable of considering multiple hypotheses before responding and will initially be available only to trusted testers.
Google has also made its Gemini 2.5 Flash model available to everyone in the Gemini app, and it's bringing the cost-efficient model to developers in Google AI Studio ahead of a wider rollout.
Xreal and Google are teaming up on Project Aura, a new pair of smart glasses that uses the Android XR platform for mixed-reality devices. We don't know much about the glasses yet, but they'll come with Gemini integration and a large field of view, along with built-in cameras and microphones.
Google is also partnering with Samsung, Gentle Monster, and Warby Parker to make other Android XR smart glasses.
Project Astra can already use your phone's camera to "see" your surroundings, but the latest prototype lets it complete tasks on your behalf, even if you don't explicitly ask it to. The model can choose to speak based on what it sees, such as pointing out a mistake in your homework.
Google is building its Gemini AI assistant into Chrome. Starting May 21st, Google AI Pro and Ultra subscribers will be able to select the Gemini button in Chrome to clarify or summarize information across webpages and navigate sites on their behalf. The feature only works with up to two tabs for now, but Google plans to add support for more later this year.
Google is rolling out a new "AI Ultra" subscription that offers access to the company's latest AI models across apps like Gemini, NotebookLM, Flow, and more. The subscription also includes early access to Gemini in Chrome and Project Mariner, which can now complete up to 10 tasks simultaneously.
Speaking of Project Astra, Google is launching Search Live, a feature that incorporates the AI assistant's capabilities. By selecting the new "Live" icon in AI Mode or Lens, you can talk back and forth with Search while showing it what's in your camera's view.
After making Gemini Live's screen-sharing feature free for all Android users last month, Google has announced that iOS users will also be able to access it for free.
Google has revealed Stitch, a new AI-powered tool that can generate user interfaces from selected themes and a description. You can also upload screenshots of wireframes, rough sketches, and other UI designs to guide Stitch's output. The experiment is currently available through Google Labs.
Google Meet is launching a new feature that translates your speech into your conversation partner's preferred language in near real time. The feature only supports English and Spanish for now, and it's rolling out to Google AI Pro and Ultra subscribers.
Gmail's smart reply feature, which uses AI to suggest responses to your emails, will now draw on information from your inbox and Google Drive to prewrite replies that sound more like you. The feature will also take your recipient's tone into account, allowing it to suggest a more formal response to your boss, for example.
Gmail's upgraded smart replies will be available in English on the web, iOS, and Android when they launch through Google Labs.
Google is testing a new feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. It uses an AI model that "understands the nuances of the human body and clothing."
Google will also soon let you shop in AI Mode, and it's adding an agentic checkout feature that can purchase products on your behalf.
If Chrome detects that your password has been compromised, Google says the browser will soon be able to generate a strong replacement and automatically update it on supported websites. The feature launches later this year, and Google says it will always ask for consent before changing your passwords.