Examples

This folder contains actively maintained examples of how to use 🤗 Transformers, organized by NLP task. If you are looking for an example that used to be in this folder, it may have moved to our research projects subfolder (which contains frozen snapshots of research projects).

Important note

To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

git clone https://github.com/huggingface/transformers
cd transformers
pip install .

Then cd into the example folder of your choice and run:

pip install -r requirements.txt

Alternatively, you can check out the version of the examples that matches your currently installed version of Transformers (for instance, v3.5.1):

git checkout tags/v3.5.1

The Big Table of Tasks

Here is the list of all our examples:

  • with information on whether they are built on top of Trainer/TFTrainer (if not, they still work, they might just lack some features),
  • whether or not they leverage the 🤗 Datasets library,
  • links to Colab notebooks to walk through the scripts and run them easily.
Task | Example datasets | Trainer support | TFTrainer support | 🤗 Datasets | Colab
language-modeling | Raw text | ✅ | - | ✅ | Open In Colab
multiple-choice | SWAG, RACE, ARC | ✅ | ✅ | ✅ | Open In Colab
question-answering | SQuAD | ✅ | ✅ | ✅ | Open In Colab
summarization | CNN/Daily Mail | ✅ | - | - | -
text-classification | GLUE, XNLI | ✅ | ✅ | ✅ | Open In Colab
text-generation | - | n/a | n/a | - | Open In Colab
token-classification | CoNLL NER | ✅ | ✅ | ✅ | Open In Colab
translation | WMT | ✅ | - | - | -

Distributed training and mixed precision

All the PyTorch scripts mentioned above work out of the box with distributed training and mixed precision, thanks to the Trainer API. To launch one of them on n GPUs, use the following command:

python -m torch.distributed.launch \
    --nproc_per_node number_of_gpu_you_have path_to_script.py \
    --all_arguments_of_the_script

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text classification MNLI task using the run_glue script, with 8 GPUs:

python -m torch.distributed.launch \
    --nproc_per_node 8 text-classification/run_glue.py \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/

If you have a GPU with mixed precision capabilities (Pascal architecture or more recent), you can use mixed precision training with PyTorch 1.6.0 or later, or by installing the Apex library for earlier versions. Just add the --fp16 flag to the command launching one of the scripts mentioned above!
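
The example scripts wire this flag into the Trainer for you. If you drive the Trainer from your own code instead, the same switch is the fp16 field of TrainingArguments; here is a minimal sketch (the output directory and hyperparameters below are placeholders, not values the scripts require):

from transformers import TrainingArguments

# --fp16 on the command line corresponds to fp16=True here.
training_args = TrainingArguments(
    output_dir="/tmp/mnli_output",    # placeholder output directory
    fp16=True,                        # enable mixed precision (Pascal+ GPU, or Apex on older PyTorch)
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    num_train_epochs=3.0,
)
# Pass training_args to Trainer(model=..., args=training_args, ...) as usual.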

Using mixed precision training usually results in a ~2x speedup for training with the same final results (as shown in this table for text classification).

Running on TPUs

When using TensorFlow, TPUs are supported out of the box as a tf.distribute.Strategy.
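
If you want to use the same mechanism in your own TensorFlow code, here is a minimal sketch of the standard TF 2.x TPU setup (the empty tpu="" argument assumes a TPU attached to the runtime, e.g. on Colab or a Cloud TPU VM):

import tensorflow as tf

# Resolve and initialize the attached TPU, then build a distribution strategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Build and compile the model inside the strategy scope so its variables
    # are replicated across the TPU cores.
    ...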

When using PyTorch, we support TPUs thanks to pytorch/xla. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the very detailed pytorch/xla README.

In this repo, we provide a very simple launcher script named xla_spawn.py that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a --num_cores flag to this script, then your regular training script with its arguments (this is similar to the torch.distributed.launch helper for torch.distributed):

python xla_spawn.py --num_cores num_tpu_you_have \
    path_to_script.py \
    --all_arguments_of_the_script

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text classification MNLI task using the run_glue script, with 8 TPUs:

python xla_spawn.py --num_cores 8 \
    text-classification/run_glue.py \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/

Logging & Experiment tracking

You can easily log and monitor your runs. The following integrations are currently supported:

Weights & Biases

To use Weights & Biases, install the wandb package with:

pip install wandb

Then log in from the command line:

wandb login

If you are in Jupyter or Colab, you should log in with:

import wandb
wandb.login()

To enable logging to W&B, include "wandb" in the report_to argument of your TrainingArguments or script. Or just pass along --report_to all if you have wandb installed.
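
If you configure TrainingArguments yourself instead of going through a script's command-line flags, the same setting looks like the sketch below (the output directory and run name are placeholders):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="/tmp/mnli_output",   # placeholder output directory
    report_to=["wandb"],             # enable the Weights & Biases integration
    run_name="bert-large-mnli",      # hypothetical W&B run name
)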

Whenever you use the Trainer or TFTrainer classes, your losses, evaluation metrics, model topology and gradients (for Trainer only) will automatically be logged.

Advanced configuration is possible by setting environment variables:

Environment Variable | Options
WANDB_LOG_MODEL | Log the model as an artifact at the end of training (false by default)
WANDB_WATCH | gradients (default): Log histograms of the gradients; all: Log histograms of gradients and parameters; false: No gradient or parameter logging
WANDB_PROJECT | Organize runs by project

Set run names with the run_name argument, present in the scripts or as part of TrainingArguments.

Additional configuration options are available through generic wandb environment variables.
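
For example, from a notebook or a small wrapper script you could set these variables before the Trainer is created (the project name below is just a placeholder):

import os

# These must be set before the training script / Trainer starts.
os.environ["WANDB_PROJECT"] = "glue-experiments"  # placeholder project name
os.environ["WANDB_LOG_MODEL"] = "true"            # upload the final model as a W&B artifact
os.environ["WANDB_WATCH"] = "all"                 # log gradient and parameter histograms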

Refer to related documentation & examples.

Comet.ml

To use comet_ml, install the Python package with:

pip install comet_ml

or if in a Conda environment:

conda install -c comet_ml -c anaconda -c conda-forge comet_ml