Transformer XL
This model is in maintenance mode only, so we won't accept any new PRs changing its code. This model was deprecated due to security issues linked to pickle.load.
We recommend switching to more recent models for improved security.
In case you would still like to use TransfoXL in your experiments, we recommend using the Hub checkpoint with a specific revision to ensure you are downloading safe files from the Hub.
You will need to set the environment variable TRUST_REMOTE_CODE to True in order to allow the usage of pickle.load():
import os
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

# Opt in to pickle.load() for this deprecated checkpoint (see the warning above).
os.environ["TRUST_REMOTE_CODE"] = "True"

# Pin a specific revision so only known-safe files are downloaded from the Hub.
checkpoint = 'transfo-xl/transfo-xl-wt103'
revision = '40a186da79458c9f9de846edfaea79c412137f97'

tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)
If you run into any issues running this model, please reinstall the last version that supported it, v4.35.0. You can do so by running the following command: pip install -U transformers==4.35.0.
Overview
The Transformer-XL model was proposed in Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. It is a causal (unidirectional) transformer with relative (sinusoidal) positional embeddings which can reuse previously computed hidden states to attend to a longer context (memory). The model also uses adaptive softmax inputs and outputs (tied).
The abstract from the paper is the following:
Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens.
This model was contributed by thomwolf. The original code can be found here.
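To make the pieces above concrete (the cached memory and the adaptive embedding/softmax cutoffs), here is a minimal sketch that builds a small, randomly initialized model from TransfoXLConfig. The parameter values are illustrative rather than the pretrained wt103 settings, and it assumes a transformers version that still ships Transformer-XL (e.g. v4.35.0, as noted above):
import torch
from transformers import TransfoXLConfig, TransfoXLModel

# Illustrative (not pretrained) hyperparameters.
config = TransfoXLConfig(
    vocab_size=10000,
    d_model=256,
    d_embed=256,
    n_head=4,
    d_head=64,
    d_inner=1024,
    n_layer=4,
    mem_len=256,            # number of previous hidden states kept as memory
    cutoffs=[1000, 5000],   # adaptive embedding/softmax cluster boundaries
    div_val=4,              # embedding size divisor between adaptive clusters
)
model = TransfoXLModel(config)

input_ids = torch.randint(0, config.vocab_size, (1, 32))
outputs = model(input_ids=input_ids)
print(outputs.last_hidden_state.shape)  # (1, 32, 256)
print(len(outputs.mems))                # cached hidden-state tensors for the next segment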
Usage tips
- Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left.
- Transformer-XL is one of the few models that has no sequence length limit.
- Same as a regular GPT model, but introduces a recurrence mechanism for two consecutive segments (similar to a regular RNN with two consecutive inputs). In this context, a segment is a number of consecutive tokens (for instance 512) that may span multiple documents, and segments are fed in order to the model.
- Basically, the hidden states of the previous segment are concatenated to the current input to compute the attention scores. This allows the model to pay attention to information that was in the previous segment as well as the current one. By stacking multiple attention layers, the receptive field can be increased to multiple previous segments (see the sketch after this list).
- This changes the positional embeddings to positional relative embeddings (as the regular positional embeddings would give the same results in the current input and the current hidden state at a given position) and needs to make some adjustments in the way attention scores are computed.
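The recurrence described above is exposed through the mems value returned by the model, which can be fed back in on the next call. A minimal sketch, reusing the tokenizer and model loaded in the snippet at the top of this page (the text and variable names are illustrative):
# Two consecutive segments of text.
segment_1 = tokenizer("The cat sat on the mat.", return_tensors="pt")
segment_2 = tokenizer("It was quiet and warm.", return_tensors="pt")

# First segment: no memory yet; the output carries the cached hidden states (mems).
out_1 = model(input_ids=segment_1["input_ids"])

# Second segment: pass the cached states so attention can also look back into segment 1.
out_2 = model(input_ids=segment_2["input_ids"], mems=out_1.mems)

# out_2.mems can be chained to the next segment in the same way.
next_mems = out_2.mems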
Transformer-XL does not work with torch.nn.DataParallel due to a bug in PyTorch; see issue #36035.
Resources
TransfoXLConfig
autodoc TransfoXLConfig
TransfoXLTokenizer
autodoc TransfoXLTokenizer - save_vocabulary
TransfoXL specific outputs
autodoc models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput
autodoc models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput
autodoc models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput
autodoc models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput
TransfoXLModel
autodoc TransfoXLModel - forward
TransfoXLLMHeadModel
autodoc TransfoXLLMHeadModel - forward
TransfoXLForSequenceClassification
autodoc TransfoXLForSequenceClassification - forward
TFTransfoXLModel
autodoc TFTransfoXLModel - call
TFTransfoXLLMHeadModel
autodoc TFTransfoXLLMHeadModel - call
TFTransfoXLForSequenceClassification
autodoc TFTransfoXLForSequenceClassification - call
Internal Layers
autodoc AdaptiveEmbedding
autodoc TFAdaptiveEmbedding