Google has introduced PaLM 2, a family of foundational language models comparable to OpenAI’s GPT-4. At its I/O event in Mountain View, California, Google announced that PaLM 2 already powers 25 products, including its Bard conversational AI assistant.
As a family of LLMs, PaLM 2 has been trained on an enormous volume of data and predicts the most likely text to follow a prompt supplied by a human. PaLM stands for “Pathways Language Model,” where “Pathways” is a machine-learning technique developed at Google. PaLM 2 is the follow-up to the original PaLM.
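The “most likely next text” idea can be sketched with a toy example: given the scores (logits) a model assigns to each candidate token, a softmax turns them into probabilities, and the highest-probability token is chosen. The vocabulary and scores below are invented purely for illustration; a real model computes its logits from billions of learned parameters.

```python
import math

# Made-up vocabulary and logits a model might assign
# after a prompt like "The cat sat on the".
vocab = ["mat", "dog", "moon", "chair"]
logits = [3.2, 0.5, -1.0, 1.8]

def softmax(xs):
    # Subtract the max for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
# Greedy decoding: pick the most probable next token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "mat" has the highest logit, so it is chosen
```

Real systems usually sample from this distribution (with temperature, top-k, etc.) rather than always taking the argmax, which is why the same prompt can yield different outputs.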
PaLM 2 supports 100+ languages and can perform tasks such as reasoning, code generation, and multilingual translation. According to Sundar Pichai, PaLM 2 comes in four sizes: Gecko, Otter, Bison, and Unicorn, of which Gecko is the smallest and can run on a mobile device.
The question that naturally arises is: how does PaLM 2 stack up against GPT-4?
Google claims PaLM 2 can beat GPT-4 on some mathematical, translation, and reasoning tasks, but reality may not match those benchmarks. In a cursory evaluation of the PaLM 2 version of Bard, Ethan Mollick, a Wharton professor who often writes about AI, found that its performance appeared worse than GPT-4’s.
The original PaLM family of language models was an internal Google Research project with no consumer exposure, though Google did begin offering limited API access. Still, the first PaLM was remarkable for its massive size: around 540 billion parameters. Parameters are the numerical variables that serve as the learned “knowledge” of the model, enabling it to make predictions and generate text based on the input it receives.
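To make “parameters” concrete: in a neural network they are the weights and biases adjusted during training. A hypothetical toy fully connected network (layer sizes invented for illustration) shows how the count adds up; PaLM’s roughly 540 billion parameters are the same kind of numbers, just vastly more of them.

```python
# Count the parameters of a tiny fully connected network:
# 8 inputs -> 16 hidden units -> 4 outputs (sizes are made up).
layer_sizes = [8, 16, 4]

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    weights = n_in * n_out   # one weight per input-output connection
    biases = n_out           # one bias per output unit
    total += weights + biases

print(total)  # (8*16 + 16) + (16*4 + 4) = 212
```

Scaling those layer sizes into the tens of thousands, and stacking dozens of transformer layers, is how model sizes reach the hundreds of billions.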