<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# XLNet

<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=xlnet">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlnet-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/xlnet-base-cased">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## Overview

The XLNet model was proposed in [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and
Quoc V. Le. XLNet is an extension of the Transformer-XL model, pretrained with an autoregressive method that learns
bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence's factorization
order.

The abstract from the paper is the following:
*With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves
better performance than pretraining approaches based on autoregressive language modeling. However, relying on
corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a
pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive
pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all
permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive
formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into
pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large
margin, including question answering, natural language inference, sentiment analysis, and document ranking.*

This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/zihangdai/xlnet/).
## Usage tips

- The specific attention pattern can be controlled at training and test time using the `perm_mask` input.
- Because it is difficult to train a fully autoregressive model over all possible factorization orders, XLNet is pretrained using only a subset of the output tokens as targets, selected with the `target_mapping` input.
- To use XLNet for sequential decoding (i.e. not in the fully bidirectional setting), use the `perm_mask` and `target_mapping` inputs to control the attention span and outputs (see examples in *examples/pytorch/text-generation/run_generation.py*).
- XLNet is one of the few models that has no sequence length limit.
- XLNet is not a traditional autoregressive model, but uses a training strategy that builds on that idea. It permutes the tokens in the sentence, then lets the model use the last n tokens to predict token n+1. Since this is all done with a mask, the sentence is actually fed to the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1,…,sequence length.
- XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies.
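The `perm_mask` and `target_mapping` inputs mentioned above are plain float tensors. The sketch below (a minimal example assuming PyTorch; the token ids are dummies) builds the two masks for predicting a single target token from the bidirectional context of the rest of the sequence. The commented-out model call shows where they would be passed.

```python
import torch

# Dummy sequence of 6 token ids (batch size 1).
seq_len = 6
input_ids = torch.arange(seq_len).unsqueeze(0)

# perm_mask[b, i, j] = 1.0 means position i may NOT attend to position j.
# Here no position may attend to the last token, so the last token must be
# predicted from the bidirectional context of the first seq_len - 1 tokens.
perm_mask = torch.zeros((1, seq_len, seq_len), dtype=torch.float)
perm_mask[:, :, -1] = 1.0

# target_mapping[b, k, j] = 1.0 marks position j as the k-th prediction target.
target_mapping = torch.zeros((1, 1, seq_len), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0  # a single target: the last position

# With a pretrained model (e.g. XLNetLMHeadModel) these would be passed as:
# outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
```

Adding more rows to `target_mapping` (and corresponding columns of ones in `perm_mask`) extends the same scheme to several prediction targets at once.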
## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
## XLNetConfig

[[autodoc]] XLNetConfig

## XLNetTokenizer

[[autodoc]] XLNetTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## XLNetTokenizerFast

[[autodoc]] XLNetTokenizerFast

## XLNet specific outputs

[[autodoc]] models.xlnet.modeling_xlnet.XLNetModelOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput

[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput

[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput

[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput

[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput

[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput

[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput
<frameworkcontent>
<pt>

## XLNetModel

[[autodoc]] XLNetModel
    - forward

## XLNetLMHeadModel

[[autodoc]] XLNetLMHeadModel
    - forward

## XLNetForSequenceClassification

[[autodoc]] XLNetForSequenceClassification
    - forward

## XLNetForMultipleChoice

[[autodoc]] XLNetForMultipleChoice
    - forward

## XLNetForTokenClassification

[[autodoc]] XLNetForTokenClassification
    - forward

## XLNetForQuestionAnsweringSimple

[[autodoc]] XLNetForQuestionAnsweringSimple
    - forward

## XLNetForQuestionAnswering

[[autodoc]] XLNetForQuestionAnswering
    - forward

</pt>
<tf>

## TFXLNetModel

[[autodoc]] TFXLNetModel
    - call

## TFXLNetLMHeadModel

[[autodoc]] TFXLNetLMHeadModel
    - call

## TFXLNetForSequenceClassification

[[autodoc]] TFXLNetForSequenceClassification
    - call

## TFXLNetForMultipleChoice

[[autodoc]] TFXLNetForMultipleChoice
    - call

## TFXLNetForTokenClassification

[[autodoc]] TFXLNetForTokenClassification
    - call

## TFXLNetForQuestionAnsweringSimple

[[autodoc]] TFXLNetForQuestionAnsweringSimple
    - call

</tf>
</frameworkcontent>