# 🤗 Transformers Notebooks
Here you can find a list of the official notebooks provided by Hugging Face.

We would also like to list interesting content created by the community. If you wrote a notebook leveraging 🤗 Transformers and would like it to be listed here, please open a Pull Request so it can be included under the Community notebooks.
## Hugging Face's notebooks 🤗

### Documentation notebooks
You can open any page of the documentation as a notebook in Colab (there is a button directly on those pages), but they are also listed here if you need them:
| Notebook | Description |
|:---|:---|
| Quicktour of the library | A presentation of the various APIs in Transformers |
| Summary of the tasks | How to run the models of the Transformers library task by task |
| Preprocessing data | How to use a tokenizer to preprocess your data (see the sketch after this table) |
| Fine-tuning a pretrained model | How to use the Trainer to fine-tune a pretrained model |
| Summary of the tokenizers | The differences between the tokenizer algorithms |
| Multilingual models | How to use the multilingual models of the library |
| Fine-tuning with custom datasets | How to fine-tune a pretrained model on various tasks |
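To give a flavor of what the preprocessing notebook walks through, here is a minimal sketch of tokenizing a batch of text with the `AutoTokenizer` API; the checkpoint name and example sentences are placeholders, not what any particular notebook uses.

```python
from transformers import AutoTokenizer

# Load a pretrained tokenizer (the checkpoint name is a placeholder).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Turn a batch of sentences into model-ready tensors.
batch = tokenizer(
    ["Hello world!", "Transformers notebooks are handy."],
    padding=True,         # pad to the longest sentence in the batch
    truncation=True,      # cut inputs longer than the model's maximum length
    return_tensors="pt",  # return PyTorch tensors
)
print(batch["input_ids"].shape)  # (batch_size, sequence_length)
```

The same `tokenizer(...)` call is the entry point whether you later feed the tensors to a `Trainer` or to a plain training loop.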
### PyTorch Examples
| Notebook | Description |
|:---|:---|
| Train your tokenizer | How to train and use your very own tokenizer |
| Train your language model | How to easily start using transformers |
| How to fine-tune a model on text classification | Show how to preprocess the data and fine-tune a pretrained model on any GLUE task |
| How to fine-tune a model on language modeling | Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task |
| How to fine-tune a model on token classification | Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS) |
| How to fine-tune a model on question answering | Show how to preprocess the data and fine-tune a pretrained model on SQuAD |
| How to fine-tune a model on multiple choice | Show how to preprocess the data and fine-tune a pretrained model on SWAG |
| How to fine-tune a model on translation | Show how to preprocess the data and fine-tune a pretrained model on WMT |
| How to fine-tune a model on summarization | Show how to preprocess the data and fine-tune a pretrained model on XSUM |
| How to fine-tune a speech recognition model in English | Show how to preprocess the data and fine-tune a pretrained speech model on TIMIT |
| How to fine-tune a speech recognition model in any language | Show how to preprocess the data and fine-tune a multilingual pretrained speech model on Common Voice |
| How to fine-tune a model on audio classification | Show how to preprocess the data and fine-tune a pretrained speech model on Keyword Spotting |
| How to train a language model from scratch | Highlight all the steps to effectively train a Transformer model on custom data |
| How to generate text | How to use different decoding methods for language generation with transformers (see the sketch after this table) |
| How to export a model to ONNX | Highlight how to export and run inference workloads through ONNX |
| How to use Benchmarks | How to benchmark models with transformers |
| Reformer | How Reformer pushes the limits of language modeling |
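The text-generation notebook revolves around `model.generate`, so here is a hedged sketch of the three decoding families it compares (greedy decoding, beam search, and nucleus sampling); the `gpt2` checkpoint and the exact parameter values are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The meaning of life is", return_tensors="pt")

# Greedy decoding: always pick the single most likely next token.
greedy = model.generate(**inputs, max_length=30)

# Beam search: keep several candidate continuations in parallel.
beams = model.generate(**inputs, max_length=30, num_beams=5, early_stopping=True)

# Top-p (nucleus) sampling: sample from the smallest token set whose
# cumulative probability exceeds p.
sampled = model.generate(**inputs, max_length=30, do_sample=True, top_p=0.92, top_k=0)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
```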
### TensorFlow Examples
| Notebook | Description |
|:---|:---|
| Train your tokenizer | How to train and use your very own tokenizer |
| Train your language model | How to easily start using transformers |
| How to fine-tune a model on text classification | Show how to preprocess the data and fine-tune a pretrained model on any GLUE task (see the sketch after this table) |
| How to fine-tune a model on language modeling | Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task |
| How to fine-tune a model on token classification | Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS) |
| How to fine-tune a model on question answering | Show how to preprocess the data and fine-tune a pretrained model on SQuAD |
| How to fine-tune a model on multiple choice | Show how to preprocess the data and fine-tune a pretrained model on SWAG |
| How to fine-tune a model on translation | Show how to preprocess the data and fine-tune a pretrained model on WMT |
| How to fine-tune a model on summarization | Show how to preprocess the data and fine-tune a pretrained model on XSUM |
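On the TensorFlow side, the fine-tuning notebooks lean on the fact that Transformers TF models are Keras models, so `compile`/`fit` apply directly. A minimal sketch, with a toy two-example dataset standing in for a real GLUE dataset, an illustrative checkpoint name, and hyperparameters that are placeholders (exact Keras integration details vary across transformers versions):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Toy data standing in for a real GLUE dataset.
enc = tokenizer(
    ["great movie", "terrible movie"],
    padding=True, truncation=True, return_tensors="tf",
)
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), [1, 0])).batch(2)

# Transformers TF models are Keras models, so compile/fit work as usual.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dataset, epochs=1)
```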
### Optimum notebooks
🤗 Optimum is an extension of 🤗 Transformers that provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency.
| Notebook | Description |
|:---|:---|
| How to quantize a model for text classification | Show how to apply Intel Neural Compressor (INC) quantization on a model for any GLUE task (see the sketch after this table) |
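The notebook above uses the Optimum/INC integration, whose API is version-dependent; as a rough stand-in (explicitly not the Optimum API), here is what post-training quantization of a text-classification model looks like with plain PyTorch dynamic quantization. The checkpoint name is a placeholder.

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Replace the Linear layers with int8-weight versions; activations are
# quantized on the fly at inference time (a CPU-oriented optimization).
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```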
## Community notebooks
More notebooks developed by the community are available here.