Update RoBERTa tips

This commit is contained in:
Lysandre 2020-02-07 12:17:51 -05:00
parent db97930122
commit dd28830327
1 changed files with 3 additions and 0 deletions


@ -23,6 +23,9 @@ Tips:
- This implementation is the same as :class:`~transformers.BertModel` with a tiny embeddings tweak as well as a
  setup for Roberta pretrained models.
- RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a
different pre-training scheme.
- RoBERTa doesn't have `token_type_ids`, so you don't need to indicate which token belongs to which segment. Just separate your segments with the separation token `tokenizer.sep_token` (or `</s>`).
- `Camembert <./camembert.html>`__ is a wrapper around RoBERTa. Refer to this page for usage examples.
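The segment-separation tip above can be sketched without loading a tokenizer. This is a hedged illustration only (the real formatting is done by the tokenizer's `build_inputs_with_special_tokens`); the helper `join_segments` is hypothetical, but the `<s> A </s></s> B </s>` pair format and the `</s>` separator match RoBERTa's conventions:

```python
# Hedged sketch: joining two segments RoBERTa-style, with no token_type_ids.
# "<s>" is RoBERTa's BOS token and "</s>" its separator (tokenizer.sep_token).
def join_segments(seg_a, seg_b, bos="<s>", sep="</s>"):
    # RoBERTa formats sentence pairs as: <s> A </s></s> B </s>
    # Segment membership is inferred from the separators, not from segment IDs.
    return f"{bos} {seg_a} {sep}{sep} {seg_b} {sep}"

print(join_segments("Hello world", "How are you"))
# → <s> Hello world </s></s> How are you </s>
```

In practice you would simply pass both segments to the tokenizer (e.g. `tokenizer(text_a, text_b)`) and it inserts these special tokens for you.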
RobertaConfig