<!---
Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Summarization example
This script shows an example of training a *summarization* model with the 🤗 Transformers library.
For straightforward use-cases you may be able to use these scripts without modification, although we have also
included comments in the code to indicate areas that you may need to adapt to your own projects.
### Multi-GPU and TPU usage
By default, these scripts use a `MirroredStrategy` and will use multiple GPUs effectively if they are available. TPUs
can also be used by passing the name of the TPU resource with the `--tpu` argument.
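
For reference, distribution-strategy selection in TensorFlow generally looks like the following. This is a minimal, hypothetical sketch rather than the script's exact code; `tpu_name` stands in for the value passed via `--tpu`.

```python
import tensorflow as tf

# Hypothetical sketch of strategy selection; `tpu_name` stands in for
# the --tpu argument (None means "train on the available GPUs/CPU").
tpu_name = None

if tpu_name is not None:
    # Connect to the named TPU and distribute the model across its cores.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_name)
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
else:
    # MirroredStrategy replicates the model onto every visible GPU,
    # or falls back to a single device if only one is available.
    strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Create and compile the model inside the scope so its variables
    # are placed and replicated by the strategy.
    ...
```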
### Example command
```
python run_summarization.py \
    --model_name_or_path facebook/bart-base \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
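
Once training finishes, the fine-tuned model can be reloaded for inference. The following is a minimal sketch, assuming the script saved both the model and tokenizer to `--output_dir`; the path and input text are placeholders.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Assumes the trained model and tokenizer were saved to --output_dir.
tokenizer = AutoTokenizer.from_pretrained("/tmp/tst-summarization")
model = TFAutoModelForSeq2SeqLM.from_pretrained("/tmp/tst-summarization")

article = "..."  # placeholder: the document you want to summarize
inputs = tokenizer(article, return_tensors="tf", truncation=True)
summary_ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```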