
# Gemma

## Overview

The Gemma model was proposed in [Gemma: Open Models Based on Gemini Research and Technology](https://arxiv.org/abs/2403.08295) by the Gemma Team at Google. Gemma models are trained on 6T tokens and released in two sizes, 2B and 7B.

The abstract from the paper is the following:

*This work introduces Gemma, a new family of open language models demonstrating strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of our model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations.*

Tips:

- The original checkpoints can be converted using the conversion script `src/transformers/models/gemma/convert_gemma_weights_to_hf.py`.

This model was contributed by Arthur Zucker, Younes Belkada, Sanchit Gandhi, and Pedro Cuenca.
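
A minimal sketch of text generation with the standard auto classes. It assumes access to the `google/gemma-2b` checkpoint on the Hub; swap in the path to locally converted weights if you used the script above:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# google/gemma-2b is assumed here; any Gemma checkpoint (including a
# locally converted one) is loaded the same way.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```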

## GemmaConfig

[[autodoc]] GemmaConfig

## GemmaTokenizer

[[autodoc]] GemmaTokenizer

## GemmaTokenizerFast

[[autodoc]] GemmaTokenizerFast

## GemmaModel

[[autodoc]] GemmaModel
    - forward

## GemmaForCausalLM

[[autodoc]] GemmaForCausalLM
    - forward

## GemmaForSequenceClassification

[[autodoc]] GemmaForSequenceClassification
    - forward
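
As with other causal LMs in the library, the sequence-classification head pools the representation of the last non-padding token. A short sketch, again assuming the `google/gemma-2b` checkpoint; the head is freshly initialized, so outputs are meaningless until the model is fine-tuned:

```python
import torch
from transformers import AutoTokenizer, GemmaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
# num_labels=2 is an arbitrary choice for illustration; the classification
# head is newly initialized and must be fine-tuned before use.
model = GemmaForSequenceClassification.from_pretrained("google/gemma-2b", num_labels=2)

inputs = tokenizer("This film was a delight.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()
```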

## GemmaForTokenClassification

[[autodoc]] GemmaForTokenClassification
    - forward

## FlaxGemmaModel

[[autodoc]] FlaxGemmaModel
    - __call__

## FlaxGemmaForCausalLM

[[autodoc]] FlaxGemmaForCausalLM
    - __call__
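
The Flax classes mirror the PyTorch API. A minimal generation sketch, assuming the `google/gemma-2b` checkpoint; depending on the checkpoint, Flax weights may not be available directly:

```python
from transformers import AutoTokenizer, FlaxGemmaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
# If the checkpoint ships only PyTorch weights, pass from_pt=True to
# convert them on the fly (requires torch to be installed).
model = FlaxGemmaForCausalLM.from_pretrained("google/gemma-2b")

# Flax models consume NumPy arrays rather than torch tensors.
inputs = tokenizer("The capital of France is", return_tensors="np")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))
```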