Google's AI Model Gemini Expands Across Gmail, YouTube, and Android
Summary:
Google's artificial intelligence model, Gemini, is set to feature extensively across all of Google's offerings, including YouTube, Gmail, and Android smartphones. Announced at the I/O 2024 developer conference, Gemini will interact more contextually with applications, help draft, summarize and search emails in Gmail, and go beyond text to handle image, audio, and video inputs. The AI model will also replace Google Assistant on Android and will enhance Google Maps with AI-generated summaries of locations.
Gemini, Google's artificial intelligence (AI) model, is becoming an integral part of the company's technological offerings, with plans to introduce it in Gmail, YouTube, and Android smartphones. During the I/O 2024 developer conference on May 14, Google CEO Sundar Pichai delivered a keynote speech dominated by the topic of AI, and specifically by Gemini, which launched in December 2023, highlighting where the model will be implemented in the future.
Google plans to embed the large language model (LLM) in almost all its services, including Android, Search, and Gmail, with new experiences for users on the horizon.
In the future, Gemini will be able to interact with applications with greater contextual awareness. Updates will let users summon Gemini within apps, for example to incorporate an AI-generated image into a message. YouTube viewers will be able to use an "Ask this video" feature, which has the AI extract specific information from a video.
Gmail, Google's email platform, will also see Gemini integration, allowing users to search, summarize, and draft emails. The AI assistant will also facilitate more complex tasks, such as managing e-commerce returns by searching the user's inbox, locating the receipt, and filling out online forms.
Google also introduced Gemini Live, a new feature enabling users to engage in detailed voice chats with the AI on their smartphones. Users can interrupt the conversation at any time to seek clarification, and Gemini can adapt to a user's conversational patterns in real time. Moreover, it will be able to respond to visual cues by examining photos or videos captured on the device.
Google is also developing intelligent AI agents that can reason, plan, and execute complicated multi-step tasks under user supervision. The ambitious project also includes handling input from images, audio, and video, with envisioned uses such as automating shopping returns and planning excursions in a new city.
Google's AI model is developing at a rapid pace, with plans to eventually replace Google Assistant on Android entirely with Gemini. New features are in the pipeline, including an "Ask Photos" function that enables users to search their photo library using Gemini, which can understand and recognize context, objects, and people, and summarize photo memories in response to user queries.
AI-generated summaries will soon feature on Google Maps, synthesizing insights from the platform's mapping data. For example, users will be able to view AI-generated summaries of places and areas.
Published At
5/15/2024 9:25:26 AM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.