Transformers Notebooks
You can find here a list of the official notebooks provided by Hugging Face.
Also, we would like to list here interesting content created by the community. If you wrote some notebook(s) leveraging transformers and would like them to be listed here, please open a Pull Request and we'll review it so it can be included here.
Hugging Face's notebooks 🤗
Notebook | Description
---|---
Getting Started Tokenizers | How to train and use your very own tokenizer
Getting Started Transformers | How to easily start using transformers
How to use Pipelines | Simple and efficient way to use state-of-the-art models on downstream tasks through transformers
How to train a language model | Highlights all the steps needed to effectively train a Transformer model on custom data
How to generate text | How to use different decoding methods for language generation with transformers