
Google introduces Gemini, a next-generation AI model for all devices.

Today, Google unveiled Gemini, its most capable generative AI (genAI) model to date. Available in three sizes, Gemini can run in settings ranging from mobile devices to data centers.

Google has been developing the Gemini large language model (LLM) for the past eight months and recently made an early version available to a select group of companies.

According to the company, the conversational genAI tool is by far its most powerful and is positioned to compete directly with existing LLMs such as OpenAI’s GPT-4 and Meta’s Llama 2.


Google CEO Sundar Pichai said in a blog post, “This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company.”

The new LLM is a multimodal model, meaning it can accept input beyond text, including audio, video, and images. Traditionally, multimodal models have been built by training separate components for each modality and then stitching them together.

According to Pichai, these stitched-together models “can be good at certain tasks, like describing images, but struggle with more conceptual and complex reasoning.” Gemini, by contrast, was pre-trained on multiple modalities from the start and is natively multimodal; Google then fine-tuned it with additional multimodal data to boost its effectiveness even further.
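To make the contrast concrete, here is a minimal, hypothetical sketch of the “natively multimodal” idea: both modalities are projected into one shared embedding space and processed by a single backbone from the start, rather than gluing separately trained single-modality models together. Every dimension, layer count, and name here is illustrative, not Gemini’s.

```python
# A minimal, hypothetical sketch of a "natively multimodal" model:
# text tokens and image patches are projected into one shared embedding
# space and processed by a single transformer backbone, instead of
# stitching together separately trained single-modality models.
# All sizes here are illustrative, not Gemini's.
import torch
import torch.nn as nn

class TinyMultimodalLM(nn.Module):
    def __init__(self, vocab=32000, patch_dim=768, d_model=512):
        super().__init__()
        self.text_embed = nn.Embedding(vocab, d_model)   # text -> shared space
        self.image_proj = nn.Linear(patch_dim, d_model)  # image patches -> shared space
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)  # one joint backbone
        self.lm_head = nn.Linear(d_model, vocab)         # next-token prediction head

    def forward(self, text_ids, image_patches):
        # Concatenate both modalities into one sequence so attention
        # mixes text and image information from the very first layer.
        seq = torch.cat([self.image_proj(image_patches),
                         self.text_embed(text_ids)], dim=1)
        return self.lm_head(self.backbone(seq))

model = TinyMultimodalLM()
logits = model(torch.randint(0, 32000, (1, 16)),  # 16 text tokens
               torch.randn(1, 9, 768))            # 9 image patches
print(logits.shape)  # torch.Size([1, 25, 32000])
```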

Gemini 1.0 will be available in three distinct sizes:


Gemini Ultra: the largest “and most capable” model, for highly complex tasks.
Gemini Pro: the model “best suited” for scaling across a wide variety of workloads.
Gemini Nano: a version designed for on-device tasks.

Alongside the launch, Google also unveiled the Cloud TPU v5p, its most powerful ASIC to date, designed expressly to meet the enormous processing demands of artificial intelligence. According to the company, the new chip can train LLMs 2.8 times faster than Google’s previous TPU v4.

LLMs are the algorithmic platforms that underpin generative AI chatbots such as ChatGPT and Bard.

Earlier this year, Google made the Cloud TPU v5e generally available, touting 2.3 times the price performance of the previous-generation TPU v4. The TPU v5p is much faster than the v4, but it also costs three and a half times as much.
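Those two figures invite a quick back-of-the-envelope comparison. Assuming, purely for illustration, that training time scales inversely with the quoted 2.8x speedup and cost scales linearly with the quoted 3.5x price:

```python
# Back-of-the-envelope comparison of the quoted TPU figures. This assumes,
# purely for illustration, that training time scales inversely with the
# quoted speedup and cost scales linearly with the quoted price; real
# workloads will differ.
v5p_speedup_vs_v4 = 2.8  # quoted: trains LLMs 2.8x faster than TPU v4
v5p_price_vs_v4 = 3.5    # quoted: costs 3.5x as much as TPU v4

relative_time = 1 / v5p_speedup_vs_v4            # ~0.36x the wall-clock time
relative_cost = v5p_price_vs_v4 * relative_time  # ~1.25x the cost per run

print(f"Same job on v5p: {relative_time:.2f}x the time, "
      f"{relative_cost:.2f}x the cost of a v4 run")
```

Under those simplifying assumptions, the same training run would cost roughly 25% more on the v5p in exchange for finishing in about a third of the time.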

Google’s new Gemini LLM is already built into some of its key products. The Bard chatbot, for instance, uses a more sophisticated version of Gemini Pro for planning, understanding, and reasoning.

The Pixel 8 Pro is the first smartphone engineered with Gemini Nano in mind, using it for features such as Smart Reply in Gboard and Summarize in Recorder. “And in Search, where it’s speeding up our Search Generative Experience (SGE), we’re already beginning to experiment with Gemini,” Google said. The company plans to bring Gemini Ultra to a new Bard Advanced experience early next year, and in the coming months Gemini will power features in more of its products and services, including Ads, Chrome, and Duet AI.

Android developers who want to build apps with Gemini capabilities for mobile devices can now register for an early beta of Gemini Nano through Android AICore.

As of December 13, developers and enterprise customers can access Gemini Pro through the Gemini API in Vertex AI or in Google AI Studio, the company’s free web-based development tool. Gemini Ultra is still undergoing further refinement, including extensive trust and safety evaluations, and Google said it will first be made available to a limited set of users in early 2024, ahead of a broader rollout to developers and enterprise clients.
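For illustration, here is roughly what the Google AI Studio path looks like in Python using Google’s google-generativeai SDK; the placeholder API key and the prompt are our own examples.

```python
# Minimal example of calling Gemini Pro through Google AI Studio with
# Google's `google-generativeai` Python SDK (pip install google-generativeai).
# "YOUR_API_KEY" is a placeholder for a key generated in AI Studio, and the
# prompt is our own example.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-pro")  # the Pro tier exposed by the API
response = model.generate_content("Explain what a multimodal model is in one paragraph.")
print(response.text)
```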

In terms of total available FLOPs per AI pod, the new TPU v5p accelerator is four times more scalable than the TPU v4.
The massive datasets that LLMs ingest demand enormous processing power, since the data must first be pre-processed, organized, and sometimes labeled before anything useful can be done with it. The LLM then learns from that data so it can produce the next word, image, or line of code that a user’s query calls for.

During training, an LLM may have to learn billions, or even more than a trillion, parameters.
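The “produce the next word” step described above is a next-token prediction objective. Below is a minimal, illustrative sketch of that objective in PyTorch; the shapes, vocabulary size, and the embedding-plus-linear stand-in for a real transformer are all hypothetical.

```python
# Minimal, illustrative sketch of the next-token prediction objective:
# the model sees tokens [t0 .. t_{n-1}] and is trained to predict
# [t1 .. t_n]. The embedding-plus-linear "model" is a stand-in for a
# real transformer; all shapes are hypothetical.
import torch
import torch.nn.functional as F

vocab_size, seq_len, d_model = 1000, 32, 64
tokens = torch.randint(0, vocab_size, (1, seq_len + 1))  # pre-processed token IDs

embed = torch.nn.Embedding(vocab_size, d_model)
lm_head = torch.nn.Linear(d_model, vocab_size)

inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift the sequence by one
logits = lm_head(embed(inputs))                  # predictions for each position
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # the gradient step that updates the model's parameters
print(loss.item())
```

In a real LLM, the stand-in model would be a deep transformer, and that gradient step is what updates the billions of parameters mentioned above.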

Alongside the new TPU, Google also unveiled the “AI Hypercomputer” from Google Cloud, a supercomputer architecture that combines an integrated set of machine-learning frameworks, open software, performance-optimized hardware, and flexible consumption models.

Google claims the AI Hypercomputer allows users to boost productivity and efficiency across AI training, tuning, and serving.

Author: Eltrys Team
