Examples

Version 2.9 of 🤗 Transformers introduced a new Trainer class for PyTorch, and its equivalent TFTrainer for TF 2. Running the examples requires PyTorch 1.3.1+ or TensorFlow 2.2+.
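If you want to verify which framework versions you have installed, a quick check from the shell (not part of the setup steps below, just a sanity check) is:

python -c "import torch; print(torch.__version__)"
python -c "import tensorflow as tf; print(tf.__version__)"

Each command prints the installed version, or fails with an ImportError if that framework is missing.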

Here is the list of all our examples:

  • grouped by task (all official examples work for multiple models)
  • with information on whether they are built on top of Trainer/TFTrainer (if not, they still work; they might just lack some features),
  • whether they also include examples for pytorch-lightning, which is a great fully-featured, general-purpose training library for PyTorch,
  • links to Colab notebooks to walk through the scripts and run them easily,
  • links to Cloud deployments to be able to deploy large-scale trainings in the Cloud with little to no setup.

This is still a work in progress; in particular, documentation is still sparse, so please contribute improvements and pull requests.

The Big Table of Tasks

| Task | Example datasets | Trainer support | TFTrainer support | pytorch-lightning | Colab |
|---|---|---|---|---|---|
| language-modeling | Raw text | ✅ | - | - | Open In Colab |
| text-classification | GLUE, XNLI | ✅ | ✅ | ✅ | Open In Colab |
| token-classification | CoNLL NER | ✅ | ✅ | ✅ | - |
| multiple-choice | SWAG, RACE, ARC | ✅ | ✅ | - | Open In Colab |
| question-answering | SQuAD | ✅ | ✅ | - | - |
| text-generation | - | n/a | n/a | n/a | Open In Colab |
| distillation | All | - | - | - | - |
| summarization | CNN/Daily Mail | ✅ | - | ✅ | - |
| translation | WMT | ✅ | - | ✅ | - |
| bertology | - | - | - | - | - |
| adversarial | HANS | ✅ | - | - | - |

Important note

Important: To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. Execute the following steps in a new virtual environment:

git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install -r ./examples/requirements.txt
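If you also want the virtual-environment step spelled out, one common way to create and activate it before running the commands above (the environment name .env is just a placeholder) is:

python -m venv .env
source .env/bin/activate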

One-click Deploy to Cloud (wip)

Coming soon!

Running on TPUs

When using TensorFlow, TPUs are supported out of the box as a tf.distribute.Strategy.

When using PyTorch, we support TPUs thanks to pytorch/xla. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the very detailed pytorch/xla README.

In this repo, we provide a very simple launcher script named xla_spawn.py that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a --num_cores flag to this script, then your regular training script with its arguments (this is similar to the torch.distributed.launch helper for torch.distributed).

For example, for run_glue:

python examples/xla_spawn.py --num_cores 8 \
	examples/text-classification/run_glue.py \
	--model_name_or_path bert-base-cased \
	--task_name mnli \
	--data_dir ./data/glue_data/MNLI \
	--output_dir ./models/tpu \
	--overwrite_output_dir \
	--do_train \
	--do_eval \
	--num_train_epochs 1 \
	--save_steps 20000
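For comparison, a multi-GPU run of the same script through the torch.distributed.launch helper mentioned above would look roughly like this (a sketch reusing the placeholder paths and arguments from the TPU command, not a command taken from this README):

python -m torch.distributed.launch --nproc_per_node 8 \
	examples/text-classification/run_glue.py \
	--model_name_or_path bert-base-cased \
	--task_name mnli \
	--data_dir ./data/glue_data/MNLI \
	--output_dir ./models/multigpu \
	--overwrite_output_dir \
	--do_train \
	--do_eval \
	--num_train_epochs 1 \
	--save_steps 20000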

Feedback and more use cases and benchmarks involving TPUs are welcome; please share with the community.

Logging & Experiment tracking

You can easily log and monitor your runs. The following integrations are currently supported:

Weights & Biases

To use Weights & Biases, install the wandb package with:

pip install wandb

Then log in from the command line:

wandb login

If you are in Jupyter or Colab, you should log in with:

import wandb
wandb.login()

Whenever you use the Trainer or TFTrainer classes, your losses, evaluation metrics, model topology and gradients (for Trainer only) will automatically be logged.
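As a rough sketch of what this looks like with one of the example scripts, assuming the Trainer integration reads the WANDB_PROJECT environment variable (it falls back to a default project if unset) and reusing the placeholder GLUE arguments from the TPU example above:

export WANDB_PROJECT=my-glue-experiments
python examples/text-classification/run_glue.py \
	--model_name_or_path bert-base-cased \
	--task_name mnli \
	--data_dir ./data/glue_data/MNLI \
	--output_dir ./models/wandb_demo \
	--do_train \
	--do_eval

The run then appears in the chosen W&B project with the logged losses and metrics.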

When using 🤗 Transformers with PyTorch Lightning, runs can be tracked through WandbLogger. Refer to related documentation & examples.

Comet.ml

To use comet_ml, install the Python package with:

pip install comet_ml

or if in a Conda environment:

conda install -c comet_ml -c anaconda -c conda-forge comet_ml