Google Releases Updated Gemini 2.0 AI for Everyone
Google is updating its Gemini AI chatbot with its second-generation AI models. As part of the upgrade, an experimental version of the flagship Gemini 2.0 Pro model is being made available to Gemini Advanced customers, while the Gemini 2.0 Flash model is now accessible to all users.
The Gemini 2.0 Flash-Lite model, which Google says is its most economical AI model to date, has also been unveiled. Additionally, the Gemini app will now feature Google’s experimental Gemini 2.0 Flash Thinking mode. This mode, which was first available in Google AI Studio and Vertex AI in December of last year, provides better reasoning skills than the Gemini 2.0 Flash model.
Google says the updated Gemini 2.0 models will be widely accessible as the race to build new and better AI models intensifies. With Gemini 2.0 Flash now generally available, developers can build production applications.
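As a rough illustration of what building on 2.0 Flash looks like, here is a minimal sketch using Google's google-genai Python SDK. The model identifier "gemini-2.0-flash" and the placeholder API key are assumptions and may differ from a given account or region.

```python
# Minimal sketch: calling Gemini 2.0 Flash through the Gemini API.
# Assumes the google-genai SDK is installed (pip install google-genai)
# and that "gemini-2.0-flash" is the model id available to your account.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder, not a real credential

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize the difference between Gemini 2.0 Flash and Flash-Lite in two sentences.",
)
print(response.text)
```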
The company is also launching an experimental version of Gemini 2.0 Pro, its best model to date for coding performance and complex prompts. It is accessible through Vertex AI, Google AI Studio, and, for Gemini Advanced users, the Gemini app.
Koray Kavukcuoglu, CTO of Google DeepMind, announcing the release on behalf of the Gemini team, said the company's most economical model to date, Gemini 2.0 Flash-Lite, is being made available in public preview in Google AI Studio and Vertex AI, and that Gemini app users will be able to access 2.0 Flash Thinking Experimental through the desktop and mobile model dropdown.
Google stated that it will continue to invest in strong safeguards that enable safe and secure use as the Gemini model family grows more capable.
“We’re also using automated red teaming to evaluate safety and security threats, such as those from indirect prompt injection, a kind of cyberattack where hackers conceal malicious instructions in data that an AI system is likely to retrieve,” the company stated.
All of these models will support multimodal input with text output at release, with additional modalities becoming generally available in the coming months.
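In practice, multimodal input means a single request can mix media and text while the answer comes back as text. Below is a hedged sketch using the same assumed SDK; the image file path and model id are placeholders.

```python
# Sketch: sending an image plus a text prompt and receiving a text answer.
# File path and model id are placeholders/assumptions.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("chart.png", "rb") as f:  # placeholder image
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
        "What trend does this chart show?",
    ],
)
print(response.text)  # output modality at launch is text
```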
The experimental version of Gemini 2.0 Pro offers the strongest coding performance and the best ability to handle complex prompts of any Gemini model, along with improved understanding and reasoning over world knowledge.
The company said the model can call tools such as Google Search and execute code, and that it has Google's largest context window at 2 million tokens, allowing it to analyze and comprehend vast amounts of information.
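For a sense of what tool calling looks like from the developer side, here is a sketch of a request that asks the experimental Pro model to ground its answer with Google Search. The model id "gemini-2.0-pro-exp" and the tool configuration names are assumptions; the current model list and SDK documentation should be checked for the exact identifiers.

```python
# Sketch: asking Gemini 2.0 Pro Experimental to ground an answer with Google Search.
# Model id and tool configuration are assumptions, not confirmed names.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-pro-exp",  # assumed experimental Pro model id
    contents="What were the key announcements in Google's latest Gemini update?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],  # Google Search grounding tool
    ),
)
print(response.text)
```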
Gemini 2.0 Flash-Lite
Google's new 2.0 Flash-Lite variant is its most economical Gemini model to date. Like the 2.0 Flash model, it supports multimodal input and a one-million-token context window, while matching the speed and cost of the 1.5 Flash model. Gemini 2.0 Flash-Lite is now available in public preview through Vertex AI and Google AI Studio.
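Under the same assumed SDK as above, trying the Flash-Lite preview would amount to swapping the model identifier; the id below is a placeholder, and the actual preview name should be confirmed in Google AI Studio or Vertex AI.

```python
# Same call pattern as before; only the (assumed) model id changes.
response = client.models.generate_content(
    model="gemini-2.0-flash-lite-preview",  # placeholder preview id; confirm before use
    contents="Give a one-line summary of this product announcement.",
)
print(response.text)
```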
2.0 Flash Thinking Experimental in the Gemini app
Google is adding the Gemini 2.0 Flash Thinking Experimental mode to the Gemini app, where it will be accessible through the desktop and mobile model dropdown menus. This mode, previously exclusive to Google AI Studio and Vertex AI, aims to improve reasoning by showing its thought process step by step.