
BERT

Overview

The BERT model was proposed in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It's a bidirectional transformer pretrained with a combination of masked language modeling and next sentence prediction objectives on a large corpus comprising the Toronto Book Corpus and Wikipedia.

The abstract from the paper is the following:

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.

BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).

This model was contributed by thomwolf. The original code can be found at https://github.com/google-research/bert.

Usage tips

  • BERT is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left.

  • BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation.

  • The inputs are corrupted by random masking: during pretraining, a given percentage of tokens (usually 15%) is selected, and each selected token is replaced by:

    • the special mask token with probability 0.8
    • a random token different from the original with probability 0.1
    • the original, unchanged token with probability 0.1
  • The model must predict the original tokens at the masked positions (see the masked language modeling sketch after this list), but it also has a second objective: the inputs are two sentences A and B, with a separation token in between. With probability 50% the sentences are consecutive in the corpus; in the remaining 50% they are unrelated. The model has to predict whether the sentences are consecutive or not (see the next sentence prediction sketch after this list).
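
A minimal sketch of the masked language modeling objective in practice, using the `fill-mask` pipeline. The `bert-base-uncased` checkpoint and the example sentence are illustrative choices, not prescribed by this page:

```python
from transformers import pipeline

# Minimal sketch: predict the token hidden behind [MASK] with a pretrained BERT checkpoint.
# "bert-base-uncased" is used here only as an illustrative checkpoint name.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("The capital of France is [MASK]."):
    # Each prediction contains the proposed token and its score.
    print(prediction["token_str"], round(prediction["score"], 3))
```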
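
Similarly, a minimal sketch of the next sentence prediction objective with BertForNextSentencePrediction, again assuming the `bert-base-uncased` checkpoint and made-up example sentences:

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

# Minimal sketch of next sentence prediction; checkpoint and sentences are illustrative.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

sentence_a = "The weather was nice."
sentence_b = "So we went for a walk."

# The tokenizer inserts the [SEP] separation token between the two sentences;
# since BERT uses absolute position embeddings, any padding goes on the right by default.
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Index 0 corresponds to "sentence B follows sentence A", index 1 to "unrelated".
print("is next sentence:", logits.argmax(dim=-1).item() == 0)
```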

Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

Multiple choice

⚡️ Inference

⚙️ Pretraining

🚀 Deploy

BertConfig

autodoc BertConfig - all

BertTokenizer

autodoc BertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary

BertTokenizerFast

autodoc BertTokenizerFast

TFBertTokenizer

autodoc TFBertTokenizer

Bert specific outputs

autodoc models.bert.modeling_bert.BertForPreTrainingOutput

autodoc models.bert.modeling_tf_bert.TFBertForPreTrainingOutput

autodoc models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput

BertModel

autodoc BertModel - forward

BertForPreTraining

autodoc BertForPreTraining - forward

BertLMHeadModel

autodoc BertLMHeadModel - forward

BertForMaskedLM

autodoc BertForMaskedLM - forward

BertForNextSentencePrediction

autodoc BertForNextSentencePrediction - forward

BertForSequenceClassification

autodoc BertForSequenceClassification - forward

BertForMultipleChoice

autodoc BertForMultipleChoice - forward

BertForTokenClassification

autodoc BertForTokenClassification - forward

BertForQuestionAnswering

autodoc BertForQuestionAnswering - forward

TFBertModel

autodoc TFBertModel - call

TFBertForPreTraining

autodoc TFBertForPreTraining - call

TFBertLMHeadModel

autodoc TFBertLMHeadModel - call

TFBertForMaskedLM

autodoc TFBertForMaskedLM - call

TFBertForNextSentencePrediction

autodoc TFBertForNextSentencePrediction - call

TFBertForSequenceClassification

autodoc TFBertForSequenceClassification - call

TFBertForMultipleChoice

autodoc TFBertForMultipleChoice - call

TFBertForTokenClassification

autodoc TFBertForTokenClassification - call

TFBertForQuestionAnswering

autodoc TFBertForQuestionAnswering - call

FlaxBertModel

autodoc FlaxBertModel - __call__

FlaxBertForPreTraining

autodoc FlaxBertForPreTraining - __call__

FlaxBertForCausalLM

autodoc FlaxBertForCausalLM - __call__

FlaxBertForMaskedLM

autodoc FlaxBertForMaskedLM - __call__

FlaxBertForNextSentencePrediction

autodoc FlaxBertForNextSentencePrediction - __call__

FlaxBertForSequenceClassification

autodoc FlaxBertForSequenceClassification - __call__

FlaxBertForMultipleChoice

autodoc FlaxBertForMultipleChoice - __call__

FlaxBertForTokenClassification

autodoc FlaxBertForTokenClassification - __call__

FlaxBertForQuestionAnswering

autodoc FlaxBertForQuestionAnswering - __call__