PaLM 2 – Google’s New Large Language Model

Is Google’s new LLM up to the job? Can it compete with OpenAI’s GPT-4?

While OpenAI is at the forefront of generative AI research, many have accused Google of falling behind. In an effort to catch up, Google unveiled PaLM 2, a new large language model, at its Google I/O conference in May 2023.

Google’s new LLM, which will be available in four different sizes for a variety of applications, is reportedly already powering multiple Google services, with much more to come.

What Exactly Is PaLM 2?

Google CEO Sundar Pichai presented Google’s latest toy, PaLM 2, at Google I/O 2023 on May 10.

Google’s improved LLM, short for Pathways Language Model 2, is the second generation of PaLM, the first version of which arrived in April 2022. Can’t recall PaLM? It was significant news at the time, drawing plenty of attention for its ability to hold simple conversations, tell rudimentary jokes, and so on. Six months later, OpenAI’s GPT-3.5 blew everything else out of the water, PaLM included.

PaLM 2 comes in four sizes, named Gecko, Otter, Bison, and Unicorn in ascending order. Gecko, the smallest, is light enough to run on mobile devices and fast enough to power responsive interactive apps on the device even when it is not connected to the internet. Because of this adaptability, PaLM 2 can be fine-tuned to support entire classes of products, allowing it to assist more people.

Gecko, which can reportedly process roughly 20 tokens per second (tokens are the small chunks of text, such as words or word fragments, that generative AI models convert into numeric values and actually operate on), looks like a game changer for AI tools deployed on mobile devices.
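To put that throughput in perspective, here is a quick back-of-the-envelope sketch in Python. The 0.75 words-per-token figure is a common rule of thumb, not something Google has published for PaLM 2’s tokenizer, so treat the numbers as rough estimates.

```python
# Back-of-the-envelope feel for what "20 tokens per second" means.
# Assumption: ~0.75 English words per token, a common rule of thumb;
# PaLM 2's actual tokenizer will differ.

TOKENS_PER_SECOND = 20   # Gecko's reported on-device throughput
WORDS_PER_TOKEN = 0.75   # rough rule of thumb, not PaLM 2-specific

def estimated_generation_seconds(word_count: int) -> float:
    """Estimate how long generating `word_count` words of output takes."""
    tokens_needed = word_count / WORDS_PER_TOKEN
    return tokens_needed / TOKENS_PER_SECOND

# A 150-word reply (a typical short chat answer) needs ~200 tokens,
# so roughly 10 seconds at Gecko's reported speed:
print(f"{estimated_generation_seconds(150):.1f} s")  # -> 10.0 s
```

In other words, at roughly 20 tokens per second, a typical short chat reply would stream out in about ten seconds on-device.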

Data for PaLM 2 Training

Given that PaLM 2 had only just been introduced, Google wasn’t especially forthcoming about its training data. However, Google’s PaLM 2 Report [PDF] says the company wanted PaLM 2 to have a better grasp of mathematics, logic, and science, and that a major portion of its training corpus was devoted to these subjects.

Still, it’s worth remembering that PaLM wasn’t exactly a slouch. When Google first announced PaLM, it disclosed that the model had 540 billion parameters, a massive figure at the time.

OpenAI’s GPT-4 is said to employ over a trillion parameters, with some speculating that the total might be as high as 1.7 trillion. Because Google wants PaLM 2 to compete directly with OpenAI’s LLMs, it’s a reasonable guess that it’ll have at least a comparable figure, if not more.

PaLM 2 also benefits greatly from multilingual training data. Google has trained PaLM 2 on over 100 languages to improve its depth and contextual understanding, as well as its translation skills.

It isn’t only natural languages, either. The LLM has been trained on more than 20 programming languages, alongside the scientific and mathematical material mentioned above, making it a fantastic asset for programmers.

PaLM 2 is already powering Google services, but it still needs fine-tuning

We won’t have to wait long to get our hands on PaLM 2 and discover what it can do. Hopefully, the rollout of PaLM 2 apps and services will go more smoothly than the launch of Bard did.
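For developers who want to try it directly, Google exposes PaLM 2 through the PaLM API. Here is a minimal sketch, assuming you have an API key and the google-generativeai Python package; at launch the text model was exposed as models/text-bison-001 (the Bison size), though model names and availability may change.

```python
# Minimal sketch of calling PaLM 2 through the PaLM API via the
# google-generativeai package (pip install google-generativeai).
# Assumes you have an API key; "models/text-bison-001" was the
# PaLM 2 text model name at launch and may change over time.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

response = palm.generate_text(
    model="models/text-bison-001",
    prompt="Summarize what PaLM 2 is in two sentences.",
    temperature=0.2,          # low temperature for a factual answer
    max_output_tokens=100,
)
print(response.result)  # the generated text
```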

You may well have used PaLM 2 already without realizing it. Google has disclosed that PaLM 2 currently powers more than 25 of its products, including Android, YouTube, Gmail, Google Docs, Google Slides, and Google Sheets.

The PaLM 2 report suggests that there is more work to be done, particularly around harmful responses across a variety of languages.

For example, PaLM 2 produces toxic responses more than 30% of the time when explicitly fed toxic prompts. Furthermore, in certain languages (English, German, and Portuguese) it generated toxic replies more than 17% of the time, with prompts involving ethnic identities and religions driving that proportion even higher.

Despite researchers’ efforts to scrub LLM training data, some harmful content will inevitably slip through. The next step, therefore, is to continue fine-tuning PaLM 2 to reduce toxic responses.

It’s a boom time for large language models

Although OpenAI was not the first to release a large language model, its GPT-3, GPT-3.5, and GPT-4 models unquestionably ignited the blue touchpaper on generative AI.

Google’s PaLM 2 has some challenges to work out, but the fact that it is now in use in multiple Google services demonstrates the company’s confidence in its newest LLM.

Wrapping Up

Google’s new large language model, PaLM 2, was unveiled at the Google I/O conference in 2023 as an effort to catch up with OpenAI in generative AI research. PaLM 2 has been trained on over 100 languages and more than 20 programming languages, with a major portion of its training corpus devoted to mathematics, logic, and science. PaLM 2 is reportedly already powering multiple Google services, although it still needs fine-tuning to minimize harmful responses. It is expected to compete directly with OpenAI’s GPT-4, which is said to employ over a trillion parameters. It is a boom time for large language models, and the introduction of PaLM 2 will hopefully lead to better AI apps and services.
