Examples

This folder contains actively maintained examples of using 🤗 Transformers with the PyTorch backend, organized by ML task.

The Big Table of Tasks

Here is the list of all our examples, with information on:

  • whether they are built on top of Trainer (if not, they still work, they might just lack some features),
  • whether or not they have a version using the 🤗 Accelerate library,
  • whether or not they leverage the 🤗 Datasets library,
  • links to Colab notebooks to walk through the scripts and run them easily.
Task | Example datasets | Trainer support | 🤗 Accelerate | 🤗 Datasets | Colab
language-modeling | WikiText-2 | ✅ | ✅ | ✅ | Open In Colab
multiple-choice | SWAG | ✅ | ✅ | ✅ | Open In Colab
question-answering | SQuAD | ✅ | ✅ | ✅ | Open In Colab
summarization | XSum | ✅ | ✅ | ✅ | Open In Colab
text-classification | GLUE | ✅ | ✅ | ✅ | Open In Colab
text-generation | - | n/a | - | - | Open In Colab
token-classification | CoNLL NER | ✅ | ✅ | ✅ | Open In Colab
translation | WMT | ✅ | ✅ | ✅ | Open In Colab
speech-recognition | TIMIT | ✅ | - | ✅ | Open In Colab
multi-lingual speech-recognition | Common Voice | ✅ | - | ✅ | Open In Colab
audio-classification | SUPERB KS | ✅ | - | ✅ | Open In Colab
image-pretraining | ImageNet-1k | ✅ | - | ✅ | /
image-classification | CIFAR-10 | ✅ | ✅ | ✅ | Open In Colab
semantic-segmentation | SCENE_PARSE_150 | ✅ | ✅ | ✅ | Open In Colab

Running quick tests

Most examples are equipped with a mechanism to truncate the number of dataset samples to a desired length. This is useful for debugging, for example to quickly check that all stages of a program can complete before running the same setup on the full dataset, which may take hours.

For example, here is how to truncate all three splits to just 50 samples each:

examples/pytorch/token-classification/run_ner.py \
--max_train_samples 50 \
--max_eval_samples 50 \
--max_predict_samples 50 \
[...]

Most example scripts support the first two command-line arguments, and some also support the third. You can quickly check whether a given example supports them by passing the -h option, e.g.:

examples/pytorch/token-classification/run_ner.py -h

Resuming training

You can resume training from a previous checkpoint like this:

  1. Pass --output_dir previous_output_dir without --overwrite_output_dir to resume training from the latest checkpoint in output_dir (what you would use if the training was interrupted, for instance).
  2. Pass --resume_from_checkpoint path_to_a_specific_checkpoint to resume training from that checkpoint folder.

Should you want to turn an example into a notebook where you'd no longer have access to the command line, 🤗 Trainer supports resuming from a checkpoint via trainer.train(resume_from_checkpoint).

  1. If resume_from_checkpoint is True it will look for the last checkpoint in the value of output_dir passed via TrainingArguments.
  2. If resume_from_checkpoint is a path to a specific checkpoint it will use that saved checkpoint folder to resume the training from.
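For instance, here is a minimal sketch of both options in a notebook, assuming model, training_args, and the datasets are already defined as in the example scripts (the checkpoint path is only illustrative):

from transformers import Trainer

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)

# Resume from the latest checkpoint found in training_args.output_dir:
trainer.train(resume_from_checkpoint=True)

# Or resume from a specific checkpoint folder:
trainer.train(resume_from_checkpoint="/tmp/test-mrpc/checkpoint-500")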

Upload the trained/fine-tuned model to the Hub

All the example scripts support automatic upload of your final model to the Model Hub by adding a --push_to_hub argument. This creates a repository named after your username and the folder you are using as output_dir: for instance "sgugger/test-mrpc" if your username is sgugger and you are working in the folder ~/tmp/test-mrpc.

To specify a given repository name, use the --hub_model_id argument. You will need to specify the whole repository name (including your username), for instance --hub_model_id sgugger/finetuned-bert-mrpc. To upload to an organization you are a member of, just use the name of that organization instead of your username: --hub_model_id huggingface/finetuned-bert-mrpc.
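For instance, here is a hypothetical command pushing a GLUE fine-tune to the Hub (the model, task, and repository names are only placeholders):

python text-classification/run_glue.py \
    --model_name_or_path google-bert/bert-base-cased \
    --task_name mrpc \
    --do_train \
    --output_dir ~/tmp/test-mrpc \
    --push_to_hub \
    --hub_model_id sgugger/finetuned-bert-mrpc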

A few notes on this integration:

  • you will need to be logged in to the Hugging Face Hub locally for this to work; the easiest way is to run huggingface-cli login and paste your access token when prompted. You can also pass your authentication token with the --hub_token argument.
  • the output_dir you pick will either need to be a new folder or a local clone of the remote repository you are using.

Distributed training and mixed precision

All the PyTorch scripts mentioned above work out of the box with distributed training and mixed precision, thanks to the Trainer API. To launch one of them on n GPUs, use the following command:

torchrun \
    --nproc_per_node number_of_gpus_you_have path_to_script.py \
    --all_arguments_of_the_script

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text classification MNLI task using the run_glue script, with 8 GPUs:

torchrun \
    --nproc_per_node 8 pytorch/text-classification/run_glue.py \
    --model_name_or_path google-bert/bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/

If you have a GPU with mixed precision capabilities (Pascal architecture or more recent), you can use mixed precision training with PyTorch 1.6.0 or later, or by installing the Apex library for earlier versions. Just add the flag --fp16 to the command launching your script!
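For example, to run the MNLI fine-tuning above in mixed precision, simply append the flag to the same torchrun command:

torchrun \
    --nproc_per_node 8 pytorch/text-classification/run_glue.py \
    --fp16 \
    [...]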

Using mixed precision training usually results in a ~2x speedup with the same final results (as shown in this table for text classification).

Running on TPUs

When using TensorFlow, TPUs are supported out of the box as a tf.distribute.Strategy.

When using PyTorch, we support TPUs thanks to pytorch/xla. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the very detailed pytorch/xla README.

In this repo, we provide a very simple launcher script named xla_spawn.py that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a --num_cores flag to this script, then your regular training script with its arguments (this is similar to the torch.distributed.launch helper for torch.distributed):

python xla_spawn.py --num_cores num_tpu_you_have \
    path_to_script.py \
    --all_arguments_of_the_script

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text classification MNLI task using the run_glue script, with 8 TPU cores (from this folder):

python xla_spawn.py --num_cores 8 \
    text-classification/run_glue.py \
    --model_name_or_path google-bert/bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/

Using Accelerate

Most PyTorch example scripts have a version using the 🤗 Accelerate library that exposes the training loop, making it easy to customize or tweak to your needs. They all require the latest development version of accelerate:

pip install git+https://github.com/huggingface/accelerate

Then configure your setup by running

accelerate config

and replying to the questions asked. Then run

accelerate test

which will check that everything is ready for training. Finally, you can launch training with

accelerate launch path_to_script.py --args_to_script
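For instance, here is a hypothetical launch of the Accelerate version of the text classification example (the script name follows the run_*_no_trainer.py convention; the hyper-parameters are placeholders to adjust to your setup):

accelerate launch text-classification/run_glue_no_trainer.py \
    --model_name_or_path google-bert/bert-base-cased \
    --task_name mrpc \
    --output_dir /tmp/mrpc_output/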

Logging & Experiment tracking

You can easily log and monitor your runs. The following integrations are currently supported:

Weights & Biases

To use Weights & Biases, install the wandb package with:

pip install wandb

Then log in from the command line:

wandb login

If you are in Jupyter or Colab, log in with:

import wandb
wandb.login()

To enable logging to W&B, include "wandb" in the report_to of your TrainingArguments or script. Or just pass along --report_to all if you have wandb installed.

Whenever you use the Trainer class, your losses, evaluation metrics, model topology and gradients will automatically be logged.
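As a minimal sketch, enabling W&B from Python might look like this (the project and run names are only illustrative):

import os

from transformers import TrainingArguments

os.environ["WANDB_PROJECT"] = "mrpc-experiments"  # optional: organize runs by project

training_args = TrainingArguments(
    output_dir="/tmp/test-mrpc",
    report_to="wandb",          # enable Weights & Biases logging
    run_name="bert-base-mrpc",  # the run name shown in the W&B UI
)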

Advanced configuration is possible by setting environment variables:

Environment Variable | Value
WANDB_LOG_MODEL | Log the model as an artifact at the end of training ("false" by default)
WANDB_WATCH | One of "gradients" (default) to log histograms of gradients, "all" to log histograms of both gradients and parameters, or "false" for no histogram logging
WANDB_PROJECT | Organize runs by project

Set run names with the run_name argument, available in the scripts or as part of TrainingArguments.

Additional configuration options are available through generic wandb environment variables.

Refer to related documentation & examples.

Comet.ml

To use comet_ml, install the Python package with:

pip install comet_ml

or if in a Conda environment:

conda install -c comet_ml -c anaconda -c conda-forge comet_ml
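Comet is then enabled the same way as the other integrations, through report_to; here is a minimal sketch (assuming your Comet credentials are already configured, e.g. via the COMET_API_KEY environment variable):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="/tmp/test-mrpc",
    report_to="comet_ml",  # log metrics to Comet during training
)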

Neptune

First, install the Neptune client library. You can do it with either pip or conda:

pip:

pip install neptune

conda:

conda install -c conda-forge neptune

Next, in your model training script, import NeptuneCallback:

from transformers.integrations import NeptuneCallback

To enable Neptune logging, in your TrainingArguments, set the report_to argument to "neptune":

training_args = TrainingArguments(
    "quick-training-distilbert-mrpc",
    eval_strategy="steps",
    eval_steps=20,
    report_to="neptune",
)

trainer = Trainer(
    model,
    training_args,
    ...
)

Note: This method requires saving your Neptune credentials as environment variables (see the bottom of the section).

Alternatively, for more logging options, create a Neptune callback:

neptune_callback = NeptuneCallback()

To add more detail to the tracked run, you can supply optional arguments to NeptuneCallback.

Some examples:

neptune_callback = NeptuneCallback(
    name="DistilBERT",
    description="DistilBERT fine-tuned on GLUE/MRPC",
    tags=["args-callback", "fine-tune", "MRPC"],  # tags help you manage runs in Neptune
    base_namespace="callback",  # the default is "finetuning"
    log_checkpoints="best",  # other options are "last", "same", and None
    capture_hardware_metrics=False,  # additional keyword arguments for a Neptune run
)

Pass the callback to the Trainer:

training_args = TrainingArguments(..., report_to=None)
trainer = Trainer(
    model,
    training_args,
    ...
    callbacks=[neptune_callback],
)

Now, when you start the training with trainer.train(), your metadata will be logged in Neptune.

Note: Although you can pass your Neptune API token and project name as arguments when creating the callback, the recommended way is to save them as environment variables:

Environment variable | Value
NEPTUNE_API_TOKEN | Your Neptune API token. To find and copy it, click your Neptune avatar and select Get your API token.
NEPTUNE_PROJECT | The full name of your Neptune project (workspace-name/project-name). To find and copy it, head to project settings → Properties.

For detailed instructions and examples, see the Neptune docs.

ClearML

To use ClearML, install the clearml package with:

pip install clearml

Then create new credentials from the ClearML Server. You can get a free hosted server here or self-host your own! After creating your new credentials, you can either copy the local snippet, which you can paste after running:

clearml-init

Or you can copy the jupyter snippet if you are in Jupyter or Colab:

%env CLEARML_WEB_HOST=https://app.clear.ml
%env CLEARML_API_HOST=https://api.clear.ml
%env CLEARML_FILES_HOST=https://files.clear.ml
%env CLEARML_API_ACCESS_KEY=***
%env CLEARML_API_SECRET_KEY=***

To enable logging to ClearML, include "clearml" in the report_to of your TrainingArguments or script. Or just pass along --report_to all if you have clearml already installed.

Advanced configuration is possible by setting environment variables:

Environment Variable | Value
CLEARML_PROJECT | Name of the project in ClearML. (default: "HuggingFace Transformers")
CLEARML_TASK | Name of the task in ClearML. (default: "Trainer")

Additional configuration options are available through generic clearml environment variables.
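Putting it together, here is a minimal sketch (the project and task names are only illustrative):

import os

from transformers import TrainingArguments

os.environ["CLEARML_PROJECT"] = "mrpc-experiments"  # overrides the default "HuggingFace Transformers"
os.environ["CLEARML_TASK"] = "bert-base-mrpc"       # overrides the default "Trainer"

training_args = TrainingArguments(
    output_dir="/tmp/test-mrpc",
    report_to="clearml",  # enable ClearML logging
)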