## Multiple Choice

Based on the script `run_multiple_choice.py`.

### Fine-tuning on SWAG

Download the [SWAG](https://github.com/rowanz/swagaf/tree/master/data) data.
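
A minimal sketch for fetching the data, assuming `git` is available and that the processor expects the CSV files from the `data/` directory of the SWAG repository:

```bash
# Clone the SWAG repository and point SWAG_DIR at its data/ folder
# (contains train.csv, val.csv, test.csv); paths here are illustrative
git clone https://github.com/rowanz/swagaf.git
export SWAG_DIR=$(pwd)/swagaf/data
```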

```bash
# Training on 4 Tesla V100 (16GB) GPUs
export SWAG_DIR=/path/to/swag_data_dir
python ./examples/multiple-choice/run_multiple_choice.py \
--task_name swag \
--model_name_or_path roberta-base \
--do_train \
--do_eval \
--data_dir $SWAG_DIR \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--max_seq_length 80 \
--output_dir models_bert/swag_base \
--per_device_eval_batch_size=16 \
--per_device_train_batch_size=16 \
--gradient_accumulation_steps 2 \
--overwrite_output_dir
```
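
Note that the effective training batch size is `per_device_train_batch_size` × number of GPUs × `gradient_accumulation_steps`, i.e. 16 × 4 × 2 = 128 for the command above.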

Training with the defined hyper-parameters yields the following results:

```
***** Eval results *****
eval_acc = 0.8338998300509847
eval_loss = 0.44457291918821606
```
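
Since the fine-tuned checkpoint is saved under `--output_dir`, evaluation can be re-run later without retraining; a minimal sketch, reusing the output directory above as the model path:

```bash
# Evaluate an already fine-tuned checkpoint without retraining
export SWAG_DIR=/path/to/swag_data_dir
python ./examples/multiple-choice/run_multiple_choice.py \
--task_name swag \
--model_name_or_path models_bert/swag_base \
--do_eval \
--data_dir $SWAG_DIR \
--max_seq_length 80 \
--per_device_eval_batch_size=16 \
--output_dir models_bert/swag_base
```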

### TensorFlow

```bash
export SWAG_DIR=/path/to/swag_data_dir
python ./examples/multiple-choice/run_tf_multiple_choice.py \
--task_name swag \
--model_name_or_path bert-base-cased \
--do_train \
--do_eval \
--data_dir $SWAG_DIR \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--max_seq_length 80 \
--output_dir models_bert/swag_base \
--per_device_eval_batch_size=16 \
--per_device_train_batch_size=16 \
--logging_dir logs \
--gradient_accumulation_steps 2 \
--overwrite_output_dir
```
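
The `--logging_dir` flag sets where TensorBoard event files are written; assuming TensorBoard is installed, the run can then be inspected with:

```bash
# Inspect training curves written to the directory passed via --logging_dir
tensorboard --logdir logs
```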

### Run it in Colab

[Open In Colab]