# RecurrentGemma

## Overview
The RecurrentGemma model was proposed in [RecurrentGemma: Moving Past Transformers for Efficient Open Language Models](https://arxiv.org/abs/2404.07839) by the Griffin, RLHF and Gemma Teams of Google.
The abstract from the paper is the following:
*We introduce RecurrentGemma, an open language model which uses Google's novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve excellent performance on language. It has a fixed-sized state, which reduces memory use and enables efficient inference on long sequences. We provide a pre-trained model with 2B non-embedding parameters, and an instruction tuned variant. Both models achieve comparable performance to Gemma-2B despite being trained on fewer tokens.*
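RecurrentGemma can be used through the standard auto classes. Below is a minimal generation sketch; the checkpoint identifier `google/recurrentgemma-2b` is an assumption here, so check the Hub for the exact published names:

```python
# Minimal sketch of text generation with RecurrentGemma.
# The checkpoint name below is an assumption; verify the exact Hub identifier.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("The Griffin architecture combines", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```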
Tips:

- The original checkpoints can be converted using the conversion script `src/transformers/models/recurrent_gemma/convert_recurrent_gemma_weights_to_hf.py`.
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/google-deepmind/recurrentgemma).
## RecurrentGemmaConfig

[[autodoc]] RecurrentGemmaConfig
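As with other models in the library, the config can be instantiated on its own to build a randomly initialized model. A brief sketch using the class's default hyperparameters:

```python
from transformers import RecurrentGemmaConfig, RecurrentGemmaModel

# Initialize a config with default hyperparameters, then build a
# randomly initialized model from it (standard Transformers pattern).
config = RecurrentGemmaConfig()
model = RecurrentGemmaModel(config)

# The configuration is accessible back from the model.
config = model.config
```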
## RecurrentGemmaModel

[[autodoc]] RecurrentGemmaModel
    - forward
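A sketch of a bare forward pass through the base model, which outputs hidden states without a language-modeling head (the checkpoint name is again an assumption):

```python
import torch
from transformers import AutoTokenizer, RecurrentGemmaModel

# Checkpoint name is an assumption; see the Hub for exact identifiers.
tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = RecurrentGemmaModel.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One hidden-state vector per input token.
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```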
## RecurrentGemmaForCausalLM

[[autodoc]] RecurrentGemmaForCausalLM
    - forward
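Passing `labels` to the causal LM's forward returns the language-modeling loss, following the usual Transformers convention. A minimal sketch (checkpoint name assumed as above):

```python
import torch
from transformers import AutoTokenizer, RecurrentGemmaForCausalLM

# Checkpoint name is an assumption; see the Hub for exact identifiers.
tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = RecurrentGemmaForCausalLM.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("Recurrent models keep a fixed-size state.", return_tensors="pt")
# Using the inputs as their own labels computes the next-token loss.
outputs = model(**inputs, labels=inputs["input_ids"])

print(outputs.loss)          # scalar language-modeling loss
print(outputs.logits.shape)  # (batch, seq_len, vocab_size)
```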