
# XLM-RoBERTa

## Overview

The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multilingual language model trained on 2.5TB of filtered CommonCrawl data.

The abstract from the paper is the following:

*This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available.*

This model was contributed by [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr).

## Usage tips

- XLM-RoBERTa is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require `lang` tensors to understand which language is used, and determines the correct language from the input ids (see the sketch after this list).
- It applies RoBERTa's training tricks to the XLM approach, but does not use the translation language modeling objective: it is trained with masked language modeling only, on sentences coming from a single language.
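Because the model infers the language from the input itself, the same checkpoint can be used across languages without any extra arguments. A minimal sketch with the `fill-mask` pipeline (the `xlm-roberta-base` checkpoint and the example sentences are illustrative):

```python
from transformers import pipeline

# No `lang` tensors needed: the model picks up the language from the input ids.
unmasker = pipeline("fill-mask", model="xlm-roberta-base")

# The same checkpoint handles English and French (or any of its 100 languages).
print(unmasker("Hello, I'm a <mask> model.")[0]["token_str"])
print(unmasker("Bonjour, je suis un modèle <mask>.")[0]["token_str"])
```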

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

**Multiple choice**

**🚀 Deploy**

This implementation is the same as RoBERTa. Refer to the [documentation of RoBERTa](roberta) for usage examples as well as information about inputs and outputs.
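For example, encoding text and running a forward pass works exactly as it does for RoBERTa; only the checkpoint name changes. A minimal sketch (the `xlm-roberta-base` checkpoint is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

# Inputs and outputs match RoBERTa: token ids in, hidden states out.
inputs = tokenizer("Hello world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```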

## XLMRobertaConfig

[[autodoc]] XLMRobertaConfig

## XLMRobertaTokenizer

[[autodoc]] XLMRobertaTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## XLMRobertaTokenizerFast

[[autodoc]] XLMRobertaTokenizerFast

## XLMRobertaModel

[[autodoc]] XLMRobertaModel
    - forward

## XLMRobertaForCausalLM

[[autodoc]] XLMRobertaForCausalLM
    - forward

## XLMRobertaForMaskedLM

[[autodoc]] XLMRobertaForMaskedLM
    - forward

## XLMRobertaForSequenceClassification

[[autodoc]] XLMRobertaForSequenceClassification
    - forward

## XLMRobertaForMultipleChoice

[[autodoc]] XLMRobertaForMultipleChoice
    - forward

## XLMRobertaForTokenClassification

[[autodoc]] XLMRobertaForTokenClassification
    - forward

## XLMRobertaForQuestionAnswering

[[autodoc]] XLMRobertaForQuestionAnswering
    - forward

## TFXLMRobertaModel

[[autodoc]] TFXLMRobertaModel
    - call

## TFXLMRobertaForCausalLM

[[autodoc]] TFXLMRobertaForCausalLM
    - call

## TFXLMRobertaForMaskedLM

[[autodoc]] TFXLMRobertaForMaskedLM
    - call

## TFXLMRobertaForSequenceClassification

[[autodoc]] TFXLMRobertaForSequenceClassification
    - call

## TFXLMRobertaForMultipleChoice

[[autodoc]] TFXLMRobertaForMultipleChoice
    - call

## TFXLMRobertaForTokenClassification

[[autodoc]] TFXLMRobertaForTokenClassification
    - call

## TFXLMRobertaForQuestionAnswering

[[autodoc]] TFXLMRobertaForQuestionAnswering
    - call

## FlaxXLMRobertaModel

[[autodoc]] FlaxXLMRobertaModel
    - __call__

## FlaxXLMRobertaForCausalLM

[[autodoc]] FlaxXLMRobertaForCausalLM
    - __call__

## FlaxXLMRobertaForMaskedLM

[[autodoc]] FlaxXLMRobertaForMaskedLM
    - __call__

## FlaxXLMRobertaForSequenceClassification

[[autodoc]] FlaxXLMRobertaForSequenceClassification
    - __call__

## FlaxXLMRobertaForMultipleChoice

[[autodoc]] FlaxXLMRobertaForMultipleChoice
    - __call__

## FlaxXLMRobertaForTokenClassification

[[autodoc]] FlaxXLMRobertaForTokenClassification
    - __call__

## FlaxXLMRobertaForQuestionAnswering

[[autodoc]] FlaxXLMRobertaForQuestionAnswering
    - __call__