Unveiling Gemma: Google Commits to Open-Model AI & LLMs

Gemma is built for responsible AI development from the same research and technology used to create Gemini models (Image: Google)
Tech giant Google, together with Google DeepMind, launches Gemma, a family of new state-of-the-art open AI models built for an open community of developers

Google launches Gemma, its next venture into open AI models.

The launch consists of model weights in two sizes, Gemma 2B and Gemma 7B, each released in pre-trained and instruction-tuned variants. Gemma is ultimately designed for smaller, work-focused tasks such as chatbots or summarisation.

According to Google, pre-trained and instruction-tuned Gemma models can run on a laptop, workstation or Google Cloud with easy deployment on Vertex AI and Google Kubernetes Engine (GKE). Gemma models also use NVIDIA GPUs and Google Cloud TPUs to ensure industry-leading performance.

“At Google, we believe in making AI helpful for everyone,” the company said via its announcement. “Today, we’re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly.”

Harnessing the power of NVIDIA GPUs

Gemma is built for responsible AI development from the same research and technology used to create Gemini models, making them capable of being fine-tuned to suit a range of business use cases, in addition to running on a range of platforms.

Gemini, which debuted at the end of 2023, was described by the company as its largest and most capable model yet. The model boasts sophisticated multi-modal capabilities: it can hold human-style conversations and generate language and content, in addition to understanding and interpreting images, code, data and analytics, enabling developers to create new AI applications.

The AI has since been rolled out on a wider scale, with the company’s conversational chatbot Bard being rebranded to Gemini.


Google and the Google DeepMind teams have worked to ensure that the AI is responsible by design, conducting extensive research into both the opportunities and the risks that Gemma can bring to users.

In line with this, the organisation has also released a new Responsible Generative AI Toolkit alongside the Gemma launch to help developers and researchers prioritise building safe and responsible AI applications. 

Gemma is also optimised for the hardware it runs on, with Google partnering with NVIDIA to utilise its GPUs and ensure industry-leading performance from data centres to the cloud. The models are also able to run across a range of devices, including laptops, desktops, IoT, mobile and the cloud, ultimately making the AI broadly accessible.

A commitment to sharing open AI models

Whilst these are open models, the AI itself is not open source.

Nevertheless, Google is very much part of a technology community that is increasingly committing to open models for AI. Whilst not following in the footsteps of companies like Meta, which have open sourced their AI more broadly, Google states that Gemma enables developers to build AI more easily, whilst still ensuring that this type of technology is harnessed in a safe and responsible way.

“We have a long history of supporting responsible open source and science, which can drive rapid research progress,” CEO of Google DeepMind Demis Hassabis said on X. “We’re proud to release Gemma: a set of lightweight open models, best-in-class for their size, inspired by the same tech used for Gemini.”




AI Magazine is a BizClik brand

