
# RecurrentGemma

## Overview

The RecurrentGemma model was proposed in [RecurrentGemma: Moving Past Transformers for Efficient Open Language Models](https://arxiv.org/abs/2404.07839) by the Griffin, RLHF and Gemma Teams of Google.

The abstract from the paper is the following:

*We introduce RecurrentGemma, an open language model which uses Google's novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve excellent performance on language. It has a fixed-sized state, which reduces memory use and enables efficient inference on long sequences. We provide a pre-trained model with 2B non-embedding parameters, and an instruction tuned variant. Both models achieve comparable performance to Gemma-2B despite being trained on fewer tokens.*

This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/google-deepmind/recurrentgemma).
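RecurrentGemma checkpoints plug into the standard `AutoModelForCausalLM` generation API. Below is a minimal generation sketch; it assumes the pre-trained 2B variant is published on the Hub as `google/recurrentgemma-2b`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hub id assumed for the 2B pre-trained checkpoint.
tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("The Griffin architecture combines", return_tensors="pt")

# The fixed-size recurrent state keeps memory use flat as generation proceeds.
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```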

## RecurrentGemmaConfig

[[autodoc]] RecurrentGemmaConfig
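As a quick orientation to the configuration API, the sketch below builds a randomly initialized model from a default `RecurrentGemmaConfig`, following the usual transformers convention that default configuration values mirror a released checkpoint.

```python
from transformers import RecurrentGemmaConfig, RecurrentGemmaModel

# Default configuration; field values are assumed to follow the released architecture.
config = RecurrentGemmaConfig()

# Randomly initialized model built from that configuration.
model = RecurrentGemmaModel(config)
print(config)
```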

## RecurrentGemmaModel

[[autodoc]] RecurrentGemmaModel
    - forward
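A hedged sketch of the bare model's forward pass, again assuming the `google/recurrentgemma-2b` Hub id: the base model returns hidden states rather than logits.

```python
import torch
from transformers import AutoTokenizer, RecurrentGemmaModel

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = RecurrentGemmaModel.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("Hello, RecurrentGemma!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One hidden state per input token: (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```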

## RecurrentGemmaForCausalLM

[[autodoc]] RecurrentGemmaForCausalLM
    - forward
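For completeness, a short sketch of the causal-LM head's forward pass (same assumed checkpoint id as above); passing `labels` makes the forward call also return a language-modeling loss, the standard pattern for `*ForCausalLM` classes.

```python
import torch
from transformers import AutoTokenizer, RecurrentGemmaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = RecurrentGemmaForCausalLM.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("Recurrent models keep a fixed-size state.", return_tensors="pt")

# With labels set, the output carries a next-token prediction loss.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss, outputs.logits.shape)
```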