
# Transformers Notebooks

Here you can find a list of the official notebooks provided by Hugging Face.

We would also like to list interesting content created by the community here. If you wrote a notebook leveraging transformers and would like it to be listed, please open a Pull Request and we'll review it for inclusion.

## Hugging Face's notebooks 🤗

| Notebook | Description |
|:---------|:------------|
| Getting Started Tokenizers | How to train and use your very own tokenizer |
| Getting Started Transformers | How to easily start using transformers |
| How to use Pipelines | A simple and efficient way to use state-of-the-art models on downstream tasks through transformers |
| How to train a language model | Highlights all the steps needed to effectively train a Transformer model on custom data |
| How to generate text | How to use different decoding methods for language generation with transformers |

The sketches below give a rough flavor of the APIs each of these notebooks covers.
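
For the tokenizers notebook, a minimal sketch of training a WordPiece tokenizer from scratch with the `tokenizers` library; the corpus path and hyperparameters below are placeholders, not values taken from the notebook:

```python
# Minimal sketch: train a WordPiece tokenizer from scratch.
# "my_corpus.txt" is a placeholder for your own text file(s).
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer()
tokenizer.train(files=["my_corpus.txt"], vocab_size=30_000, min_frequency=2)

encoding = tokenizer.encode("Hello, how are you?")
print(encoding.tokens)  # wordpiece tokens
print(encoding.ids)     # their vocabulary ids
```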
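The pipelines notebook boils downstream tasks down to a one-liner. A minimal sketch for sentiment analysis, where the printed result is illustrative rather than an exact output:

```python
# Minimal sketch: run a downstream task with the pipeline API.
# The default model for the task is downloaded on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("Transformers makes state-of-the-art NLP easy to use.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```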
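The language-model notebook walks through data preparation and a full training run; compressed to a single masked-LM step, and assuming a recent transformers version with the `distilbert-base-uncased` checkpoint (an illustrative choice, not necessarily the one used in the notebook), a sketch might look like this:

```python
# Minimal sketch: one masked-LM training step on toy data.
# A real run would loop over a DataLoader, as the notebook does.
import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)

texts = ["Custom data goes here.", "One example per line."]  # placeholder corpus
batch = collator([tokenizer(t) for t in texts])  # pads and randomly masks tokens

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**batch).loss  # cross-entropy on the masked positions
loss.backward()
optimizer.step()
```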
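Finally, the text-generation notebook compares decoding strategies. A minimal sketch contrasting greedy decoding with top-k/top-p sampling on GPT-2; outputs will vary from run to run:

```python
# Minimal sketch: greedy decoding vs. sampling with GPT-2.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The transformers library", return_tensors="pt")

# Greedy: always pick the most likely next token (deterministic).
greedy = model.generate(**inputs, max_length=30)

# Sampling: draw from a top-k / nucleus-truncated distribution.
sampled = model.generate(**inputs, max_length=30, do_sample=True,
                         top_k=50, top_p=0.95)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```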