Automatic Speech Recognition Examples

Table of Contents

- Connectionist Temporal Classification
- Connectionist Temporal Classification With Adapters
- Sequence to Sequence

Connectionist Temporal Classification

The script run_speech_recognition_ctc.py can be used to fine-tune any pretrained Connectionist Temporal Classification Model for automatic speech recognition on one of the official speech recognition datasets or a custom dataset.

Speech recognition models that have been pretrained in an unsupervised fashion on audio data alone, e.g. Wav2Vec2, HuBERT or XLSR-Wav2Vec2, have been shown to require only very little annotated data to yield good performance on automatic speech recognition datasets.

In the script run_speech_recognition_ctc.py, we first create a vocabulary from all unique characters of both the training data and evaluation data. Then, we preprocess the speech recognition dataset, which includes correct resampling, normalization and padding. Finally, the pretrained speech recognition model is fine-tuned on the annotated speech recognition datasets using CTC loss.
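To make the vocabulary step concrete, here is a minimal sketch of how such a character vocabulary can be built with the datasets library (dataset and column names follow the Common Voice example below; the script's actual implementation differs in detail):

from datasets import load_dataset

# Load the text side of the dataset (here only the train split, for brevity).
dataset = load_dataset("common_voice", "tr", split="train")

def extract_all_chars(batch):
    # Collect every unique character that appears in the transcriptions.
    all_text = " ".join(batch["sentence"])
    return {"vocab": [list(set(all_text))]}

vocabs = dataset.map(extract_all_chars, batched=True, batch_size=-1, remove_columns=dataset.column_names)

# Map each character to an integer id; this becomes the CTC tokenizer vocabulary.
vocab_dict = {char: idx for idx, char in enumerate(sorted(set(vocabs["vocab"][0])))}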


NOTE

If you encounter problems with data preprocessing when setting --preprocessing_num_workers > 1, you might want to set the environment variable OMP_NUM_THREADS to 1 as follows:

OMP_NUM_THREADS=1 python run_speech_recognition_ctc.py ...

If the environment variable is not set, the training script might freeze (see https://github.com/pytorch/audio/issues/1021#issuecomment-726915239).


Single GPU CTC

The following command shows how to fine-tune XLSR-Wav2Vec2 on Common Voice using a single GPU in half-precision.

python run_speech_recognition_ctc.py \
	--dataset_name="common_voice" \
	--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
	--dataset_config_name="tr" \
	--output_dir="./wav2vec2-common_voice-tr-demo" \
	--overwrite_output_dir \
	--num_train_epochs="15" \
	--per_device_train_batch_size="16" \
	--gradient_accumulation_steps="2" \
	--learning_rate="3e-4" \
	--warmup_steps="500" \
	--evaluation_strategy="steps" \
	--text_column_name="sentence" \
	--length_column_name="input_length" \
	--save_steps="400" \
	--eval_steps="100" \
	--layerdrop="0.0" \
	--save_total_limit="3" \
	--freeze_feature_encoder \
	--gradient_checkpointing \
	--chars_to_ignore , ? . ! - \; \: \" “ % <20> \
	--fp16 \
	--group_by_length \
	--push_to_hub \
	--do_train --do_eval 

On a single V100 GPU, this script should run in ca. 1 hour 20 minutes and yield a CTC loss of 0.39 and word error rate of 0.35.

Multi GPU CTC

The following command shows how to fine-tune XLSR-Wav2Vec2 on Common Voice using 8 GPUs in half-precision.

torchrun \
	--nproc_per_node 8 run_speech_recognition_ctc.py \
	--dataset_name="common_voice" \
	--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
	--dataset_config_name="tr" \
	--output_dir="./wav2vec2-common_voice-tr-demo-dist" \
	--overwrite_output_dir \
	--num_train_epochs="15" \
	--per_device_train_batch_size="4" \
	--learning_rate="3e-4" \
	--warmup_steps="500" \
	--evaluation_strategy="steps" \
	--text_column_name="sentence" \
	--length_column_name="input_length" \
	--save_steps="400" \
	--eval_steps="100" \
	--logging_steps="1" \
	--layerdrop="0.0" \
	--save_total_limit="3" \
	--freeze_feature_encoder \
	--gradient_checkpointing \
	--chars_to_ignore , ? . ! - \; \: \" “ % <20> \
	--fp16 \
	--group_by_length \
	--push_to_hub \
	--do_train --do_eval

On 8 V100 GPUs, this script should run in ca. 18 minutes and yield a CTC loss of 0.39 and word error rate of 0.36.

Multi GPU CTC with Dataset Streaming

The following command shows how to use Dataset Streaming mode to fine-tune XLS-R on Common Voice using 4 GPUs in half-precision.

Streaming mode imposes several constraints on training:

  1. We need to construct a tokenizer beforehand and define it via --tokenizer_name_or_path.
  2. --num_train_epochs has to be replaced by --max_steps. Similarly, all other epoch-based arguments have to be replaced by step-based ones.
  3. Full dataset shuffling on each epoch is not possible, since we don't have the whole dataset available at once. However, the --shuffle_buffer_size argument controls how many examples we can pre-download before shuffling them (see the sketch below).
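Concretely, streaming looks roughly as follows on the datasets side (a minimal sketch; the training script wires this up for you, and the dataset name matches the command further below):

from datasets import load_dataset

# Nothing is downloaded up front; examples are fetched on the fly.
train = load_dataset("common_voice", "tr", split="train", streaming=True)

# Full-dataset shuffling is impossible, so examples are shuffled within a
# fixed-size buffer -- this is what --shuffle_buffer_size controls.
train = train.shuffle(seed=42, buffer_size=500)

for example in train.take(2):
    print(example["sentence"])

The full training command then reads:
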
torchrun \
	--nproc_per_node 4 run_speech_recognition_ctc_streaming.py \
	--dataset_name="common_voice" \
	--model_name_or_path="facebook/wav2vec2-xls-r-300m" \
	--tokenizer_name_or_path="anton-l/wav2vec2-tokenizer-turkish" \
	--dataset_config_name="tr" \
	--train_split_name="train+validation" \
	--eval_split_name="test" \
	--output_dir="wav2vec2-xls-r-common_voice-tr-ft" \
	--overwrite_output_dir \
	--max_steps="5000" \
	--per_device_train_batch_size="8" \
	--gradient_accumulation_steps="2" \
	--learning_rate="5e-4" \
	--warmup_steps="500" \
	--evaluation_strategy="steps" \
	--text_column_name="sentence" \
	--save_steps="500" \
	--eval_steps="500" \
	--logging_steps="1" \
	--layerdrop="0.0" \
	--eval_metrics wer cer \
	--save_total_limit="1" \
	--mask_time_prob="0.3" \
	--mask_time_length="10" \
	--mask_feature_prob="0.1" \
	--mask_feature_length="64" \
	--freeze_feature_encoder \
	--chars_to_ignore , ? . ! - \; \: \" “ % <20> \
	--max_duration_in_seconds="20" \
	--shuffle_buffer_size="500" \
	--fp16 \
	--push_to_hub \
	--do_train --do_eval \
	--gradient_checkpointing

On 4 V100 GPUs, this script should run in ca. 3h 31min and yield a CTC loss of 0.35 and word error rate of 0.29.

Examples CTC

The following tables present a couple of example runs on the most popular speech-recognition datasets. The presented performances are by no means optimal as no hyper-parameter tuning was done. Nevertheless, they can serve as a baseline to improve upon.

TIMIT CTC

| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | Phoneme error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|---------|----------------|------------------|-------------------------|----------------------------|-----------|---------------|-------------------------|----------------------|
| TIMIT | - | wav2vec2-base | 0.21 | - | 1 GPU TITAN RTX | 32min | here | run.sh |
| TIMIT | - | wav2vec2-base | 0.21 | - | 1 GPU TITAN RTX | 32min | here | run.sh |
| TIMIT | - | unispeech-large-1500h-cv | 0.22 | - | 1 GPU TITAN RTX | 35min | here | run.sh |
| TIMIT | - | asapp/sew-mid-100k | 0.30 | - | 1 GPU TITAN RTX | 28min | here | run.sh |
| TIMIT | - | ntu-spml/distilhubert | 0.68 | - | 1 GPU TITAN RTX | 26min | here | run.sh |

Librispeech CTC

| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | Phoneme error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|---------|----------------|------------------|-------------------------|----------------------------|-----------|---------------|-------------------------|----------------------|
| Librispeech | "clean" - "train.100" | microsoft/wavlm-large | 0.049 | - | 8 GPU V100 | 1h30min | here | run.sh |
| Librispeech | "clean" - "train.100" | microsoft/wavlm-base-plus | 0.068 | - | 8 GPU V100 | 1h30min | here | run.sh |
| Librispeech | "clean" - "train.100" | facebook/wav2vec2-large-lv60 | 0.042 | - | 8 GPU V100 | 1h30min | here | run.sh |
| Librispeech | "clean" - "train.100" | facebook/wav2vec2-large-lv60 | 0.042 | - | 8 GPU V100 | 1h30min | here | run.sh |
| Librispeech | "clean" - "train.100" | facebook/hubert-large-ll60k | 0.088 | - | 8 GPU V100 | 1h30min | here | run.sh |
| Librispeech | "clean" - "train.100" | asapp/sew-mid-100k | 0.167 | - | 8 GPU V100 | 54min | here | run.sh |

Common Voice CTC

| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | Phoneme error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|---------|----------------|------------------|-------------------------|----------------------------|-----------|---------------|-------------------------|----------------------|
| Common Voice | "tr" | facebook/wav2vec2-large-xls-r-300m | - | 0.099 | 8 GPU V100 | 23min | here | run.sh |
| Common Voice | "it" | facebook/wav2vec2-large-xls-r-300m | - | 0.077 | 8 GPU V100 | 23min | here | run.sh |
| Common Voice | "sv-SE" | facebook/wav2vec2-large-xls-r-300m | - | 0.099 | 8 GPU V100 | 23min | here | run.sh |
| Common Voice | "tr" | facebook/wav2vec2-large-xlsr-53 | 0.36 | - | 8 GPU V100 | 18min | here | run.sh |
| Common Voice | "tr" | facebook/wav2vec2-large-xlsr-53 | 0.31 | - | 8 GPU V100 | 1h05 | here | run.sh |
| Common Voice | "tr" | facebook/wav2vec2-large-xlsr-53 | 0.35 | - | 1 GPU V100 | 1h20min | here | run.sh |
| Common Voice | "tr" | facebook/wav2vec2-xls-r-300m | 0.31 | - | 8 GPU V100 | 1h05 | here | run.sh |
| Common Voice | "tr" | facebook/wav2vec2-xls-r-1b | 0.21 | - | 2 GPU Titan 24 GB RAM | 15h10 | here | run.sh |
| Common Voice | "tr" (streaming mode) | facebook/wav2vec2-xls-r-300m | 0.29 | - | 4 GPU V100 | 3h31 | here | run.sh |

Multilingual Librispeech CTC

| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | Phoneme error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|---------|----------------|------------------|-------------------------|----------------------------|-----------|---------------|-------------------------|----------------------|
| Multilingual Librispeech | "german" | facebook/wav2vec2-large-xlsr-53 | 0.13 | - | 1 GPU Titan 24 GB RAM | 15h04 | here | run.sh |
| Multilingual Librispeech | "german" | facebook/wav2vec2-xls-r-300m | 0.15 | - | 1 GPU Titan 24 GB RAM | 15h04 | here | run.sh |

Connectionist Temporal Classification With Adapters

The script run_speech_recognition_ctc_adapter.py can be used to fine-tune adapter layers for Wav2Vec2-style models such as MMS for automatic speech recognition.

MMS Model

The Massive Multilingual Speech (MMS) model has been pre-trained and fine-tuned on 1000+ languages. The model makes use of adapter attention layers to fine-tune only a small part of the model on a specific language. The model already comes with fine-tuned adapter layers for 1000+ languages and can be used for inference for 1000+ languages out of the box.
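For illustration, out-of-the-box inference with a specific language adapter looks roughly as follows (a minimal sketch following the MMS documentation in Transformers; "tur" and "swe" are the ISO 639-3 codes for Turkish and Swedish used later in this example):

from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id, target_lang="tur")
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang="tur", ignore_mismatched_sizes=True)

# Switching to another supported language only swaps the adapter weights and
# the tokenizer vocabulary, not the full model.
processor.tokenizer.set_target_lang("swe")
model.load_adapter("swe")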

However, for improved performance or more specific use cases, one can re-initialize the adapter weights, freeze all other weights, and fine-tune only the adapter layers on a specific dataset as shown in the example below.

Note that the adapter weights include low-dimensional linear layers for every attention block as well as the final language model head layers.
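A rough sketch of this freezing pattern is shown below (the parameter-name filter is an illustrative assumption, not the exact code of run_speech_recognition_ctc_adapter.py):

from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all")

# Re-initialize the adapter weights and freeze the base model; after
# freeze_base_model(), only the language-model head stays trainable.
model.init_adapter_layers()
model.freeze_base_model()

# Un-freeze the adapter layers, identified here by a simple name filter.
for name, param in model.named_parameters():
    if "adapter" in name:
        param.requires_grad = True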

Examples CTC Adapter

In the following we will look at how one can fine-tune adapter weights for any of the MMS CTC checkpoints in less than 1 hour.

Common Voice CTC Adapter

As in the examples above, we fine-tune on the Common Voice 6 dataset in Turkish as an example. In contrast to run_speech_recognition_ctc.py, there is an additional --target_language argument that has to be defined to state for which language or concept the adapter layers should be trained. The adapter weights will then be saved accordingly as adapter.<target_language>.safetensors.

Let's run an example script. Make sure to be logged in so that your model can be directly uploaded to the Hub.

huggingface-cli login

Now, let's run an example and upload it to the Hub under wav2vec2-common_voice-tr-mms-demo.

python run_speech_recognition_ctc.py \
	--dataset_name="common_voice" \
	--model_name_or_path="facebook/mms-1b-all" \
	--dataset_config_name="tr" \
	--output_dir="./wav2vec2-common_voice-tr-mms-demo" \
	--num_train_epochs="4" \
	--per_device_train_batch_size="32" \
	--learning_rate="1e-3" \
	--warmup_steps="100" \
	--evaluation_strategy="steps" \
	--text_column_name="sentence" \
	--length_column_name="input_length" \
	--save_steps="200" \
	--eval_steps="100" \
	--save_total_limit="3" \
  --target_language="tur" \
	--gradient_checkpointing \
	--chars_to_ignore , ? . ! - \; \: \" “ % <20> \
	--fp16 \
	--group_by_length \
	--do_train --do_eval \
  --push_to_hub

This should take less than 10 minutes on most GPUs and you should very quickly get word error rates below 27%.

For an example run, you can have a look at patrickvonplaten/wav2vec2-common_voice-tr-mms-demo.

If you'd like to train another adapter model with the same base model, you can simply re-use the same --output_dir, but make sure to pass the --output_dir folder also to --tokenizer_name_or_path so that the vocabulary is not overwritten but extended. Assuming you would like to train adapter weights on Swedish in addition to Turkish and save the adapter weights in the same model repo, you can run:

python run_speech_recognition_ctc.py \
	--dataset_name="common_voice" \
	--model_name_or_path="facebook/mms-1b-all" \
	--dataset_config_name="sw" \
	--output_dir="./wav2vec2-common_voice-tr-mms-demo" \
	--tokenizer_name_or_path="./wav2vec2-common_voice-tr-mms-demo" \
	--num_train_epochs="4" \
	--per_device_train_batch_size="32" \
	--learning_rate="1e-3" \
	--warmup_steps="100" \
	--evaluation_strategy="steps" \
	--text_column_name="sentence" \
	--length_column_name="input_length" \
	--save_steps="200" \
	--eval_steps="100" \
	--save_total_limit="3" \
  --target_language="swe" \
	--gradient_checkpointing \
	--chars_to_ignore , ? . ! - \; \: \" “ % <20> \
	--fp16 \
	--group_by_length \
	--do_train --do_eval \
  --push_to_hub

Now you should have both adapter.tur.safetensors and adapter.swe.safetensors in the model repo, and you can load the adapter for the respective language with:

model.load_adapter("tur")  # or "swe"


Sequence to Sequence

The script run_speech_recognition_seq2seq.py can be used to fine-tune any Speech Sequence-to-Sequence Model for automatic speech recognition on one of the official speech recognition datasets or a custom dataset. This includes the Whisper model from OpenAI or a warm-started Speech-Encoder-Decoder Model, examples for which are included below.

Whisper Model

We can load all components of the Whisper model directly from the pretrained checkpoint, including the pretrained model weights, feature extractor and tokenizer. We simply have to specify our fine-tuning dataset and training hyperparameters.
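For example, loading all components from a single checkpoint is as simple as (a minimal sketch):

from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "openai/whisper-small"
processor = WhisperProcessor.from_pretrained(model_id)  # feature extractor + tokenizer
model = WhisperForConditionalGeneration.from_pretrained(model_id)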

Single GPU Whisper Training

The following example shows how to fine-tune the Whisper small checkpoint on the Hindi subset of Common Voice 11 using a single GPU device in half-precision:

python run_speech_recognition_seq2seq.py \
	--model_name_or_path="openai/whisper-small" \
	--dataset_name="mozilla-foundation/common_voice_11_0" \
	--dataset_config_name="hi" \
	--language="hindi" \
	--train_split_name="train+validation" \
	--eval_split_name="test" \
	--max_steps="5000" \
	--output_dir="./whisper-small-hi" \
	--per_device_train_batch_size="16" \
	--gradient_accumulation_steps="2" \
	--per_device_eval_batch_size="16" \
	--logging_steps="25" \
	--learning_rate="1e-5" \
	--warmup_steps="500" \
	--evaluation_strategy="steps" \
	--eval_steps="1000" \
	--save_strategy="steps" \
	--save_steps="1000" \
	--generation_max_length="225" \
	--preprocessing_num_workers="16" \
	--length_column_name="input_length" \
	--max_duration_in_seconds="30" \
	--text_column_name="sentence" \
	--freeze_feature_encoder="False" \
	--gradient_checkpointing \
	--group_by_length \
	--fp16 \
	--overwrite_output_dir \
	--do_train \
	--do_eval \
	--predict_with_generate \
	--use_auth_token

On a single V100, training should take approximately 8 hours, with a final cross-entropy loss of 1e-4 and word error rate of 32.6%.

If training on a different language, you should be sure to change the language argument. The language argument should be omitted for English speech recognition.
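Under the hood, the language argument roughly corresponds to forcing the matching language token at generation time; a minimal sketch (the fine-tuning script handles this for you via --language):

from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# --language="hindi" forces the Hindi language token at the start of generation.
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="hindi", task="transcribe")

# For English speech recognition, no language token is forced:
# model.config.forced_decoder_ids = None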

Multi GPU Whisper Training

The following example shows how to fine-tune the Whisper small checkpoint on the Hindi subset of Common Voice 11 using 2 GPU devices in half-precision:

torchrun \
 	--nproc_per_node 2 run_speech_recognition_seq2seq.py \
	--model_name_or_path="openai/whisper-small" \
	--dataset_name="mozilla-foundation/common_voice_11_0" \
	--dataset_config_name="hi" \
	--language="hindi" \
	--train_split_name="train+validation" \
	--eval_split_name="test" \
	--max_steps="5000" \
	--output_dir="./whisper-small-hi" \
	--per_device_train_batch_size="16" \
	--per_device_eval_batch_size="16" \
	--logging_steps="25" \
	--learning_rate="1e-5" \
	--warmup_steps="500" \
	--evaluation_strategy="steps" \
	--eval_steps="1000" \
	--save_strategy="steps" \
	--save_steps="1000" \
	--generation_max_length="225" \
	--preprocessing_num_workers="16" \
	--length_column_name="input_length" \
	--max_duration_in_seconds="30" \
	--text_column_name="sentence" \
	--freeze_feature_encoder="False" \
	--gradient_checkpointing \
	--group_by_length \
	--fp16 \
	--overwrite_output_dir \
	--do_train \
	--do_eval \
	--predict_with_generate \
	--use_auth_token

On two V100s, training should take approximately 4 hours, with a final cross-entropy loss of 1e-4 and word error rate of 32.6%.

Warm-Started Speech-Encoder-Decoder Model

A very common use case is to leverage a pretrained speech encoder model, e.g. Wav2Vec2, HuBERT or XLSR-Wav2Vec2, with a pretrained text decoder model, e.g. BART or GPT-2, to create a Speech-Encoder-Decoder Model.

By pairing a pretrained speech model with a pretrained text model, the warm-started model has prior knowledge of both the source audio and target text domains. However, the cross-attention weights between the encoder and decoder are randomly initialised. Thus, the model requires fine-tuning to learn the cross-attention weights and align the encoder mapping with that of the decoder. We can perform this fine-tuning procedure using the example script.

As an example, let's instantiate a Wav2Vec2-2-Bart model with the SpeechEncoderDecoderModel framework. First create an empty repo on hf.co:

huggingface-cli repo create wav2vec2-2-bart-base
git clone https://huggingface.co/<your-user-name>/wav2vec2-2-bart-base
cd wav2vec2-2-bart-base

Next, run the following script inside the cloned repo:

from transformers import SpeechEncoderDecoderModel, AutoFeatureExtractor, AutoTokenizer, Wav2Vec2Processor

# checkpoints to leverage
encoder_id = "facebook/wav2vec2-base"
decoder_id = "facebook/bart-base"

# load and save speech-encoder-decoder model
# set some hyper-parameters for training and evaluation
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    encoder_id,
    decoder_id,
    encoder_add_adapter=True,
    encoder_feat_proj_dropout=0.0,
    encoder_layerdrop=0.0,
    max_length=200,
    num_beams=5,
)
model.config.decoder_start_token_id = model.decoder.config.bos_token_id
model.config.pad_token_id = model.decoder.config.pad_token_id
model.config.eos_token_id = model.decoder.config.eos_token_id
model.save_pretrained("./")

# load and save processor
feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
tokenizer = AutoTokenizer.from_pretrained(decoder_id)
processor = Wav2Vec2Processor(feature_extractor, tokenizer)
processor.save_pretrained("./")

Finally, we can upload all files:

git lfs install
git add . && git commit -m "upload model files" && git push

and link the official run_speech_recognition_seq2seq.py script to the folder:

ln -s $(realpath <path/to/transformers>/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py) ./

Note that we have added a randomly initialized adapter layer to wav2vec2-base with the argument encoder_add_adapter=True. This adapter sub-samples the output sequence of wav2vec2-base along the time dimension. By default, a single output vector of wav2vec2-base has a receptive field of ca. 25ms (cf. Section 4.2 of the official Wav2Vec2 paper), which represents a little less than a single character. BART, on the other hand, makes use of a sentence-piece tokenizer as an input processor, so that a single hidden vector of bart-base represents ca. 4 characters. To better align the receptive field of the Wav2Vec2 output vectors with BART's hidden-states in the cross-attention mechanism, we further subsample Wav2Vec2's output by a factor of 8 by adding a convolution-based adapter, so that each adapted output vector covers roughly 8 × 25ms ≈ 200ms of audio.
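A rough way to sanity-check the subsampling is to compare sequence lengths (this assumes the warm-started model saved to the current directory as shown above):

import torch
from transformers import SpeechEncoderDecoderModel

model = SpeechEncoderDecoderModel.from_pretrained("./")

one_second = torch.randn(1, 16000)  # one second of 16 kHz audio
with torch.no_grad():
    num_frames = model.encoder(one_second).last_hidden_state.shape[1]

# Without the adapter, wav2vec2-base emits roughly 49 vectors per second of
# audio; with the 8x convolutional adapter we expect only around 6.
print(num_frames)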

Having warm-started the speech-encoder-decoder model under <your-user-name>/wav2vec2-2-bart-base, we can now fine-tune it on the task of speech recognition.

In the script run_speech_recognition_seq2seq.py, we load the warm-started model, feature extractor, and tokenizer, process a speech recognition dataset, and subsequently make use of the Seq2SeqTrainer to train our system. Note that it is important to align the target transcriptions with the decoder's vocabulary. For example, the Librispeech dataset only contains capitalized letters in the transcriptions, whereas BART was pretrained mostly on normalized text. Thus, it is recommended to add the argument --do_lower_case to the fine-tuning script when using a warm-started SpeechEncoderDecoderModel. The model is fine-tuned on the standard cross-entropy language modeling loss for sequence-to-sequence (just like T5 or BART in natural language processing).


NOTE

If you encounter problems with data preprocessing when setting --preprocessing_num_workers > 1, you might want to set the environment variable OMP_NUM_THREADS to 1 as follows:

OMP_NUM_THREADS=1 python run_speech_recognition_seq2seq.py ...

If the environment variable is not set, the training script might freeze (see https://github.com/pytorch/audio/issues/1021#issuecomment-726915239).


Single GPU Seq2Seq

The following command shows how to fine-tune the warm-started SpeechEncoderDecoderModel on Librispeech using a single GPU in half-precision.

python run_speech_recognition_seq2seq.py \
	--dataset_name="librispeech_asr" \
	--model_name_or_path="./" \
	--dataset_config_name="clean" \
	--train_split_name="train.100" \
	--eval_split_name="validation" \
	--output_dir="./" \
	--preprocessing_num_workers="16" \
	--length_column_name="input_length" \
	--overwrite_output_dir \
	--num_train_epochs="5" \
	--per_device_train_batch_size="8" \
	--per_device_eval_batch_size="8" \
	--gradient_accumulation_steps="8" \
	--learning_rate="3e-4" \
	--warmup_steps="400" \
	--evaluation_strategy="steps" \
	--text_column_name="text" \
	--save_steps="400" \
	--eval_steps="400" \
	--logging_steps="10" \
	--save_total_limit="1" \
	--freeze_feature_encoder \
	--gradient_checkpointing \
	--fp16 \
	--group_by_length \
	--predict_with_generate \
	--generation_max_length="40" \
	--generation_num_beams="1" \
	--do_train --do_eval \
	--do_lower_case

On a single V100 GPU, this script should run in ca. 5 hours and yield a cross-entropy loss of 0.405 and word error rate of 0.0728.

Multi GPU Seq2Seq

The following command shows how to fine-tune the warm-started SpeechEncoderDecoderModel on Librispeech using 8 GPUs in half-precision.

torchrun \
 	--nproc_per_node 8 run_speech_recognition_seq2seq.py \
	--dataset_name="librispeech_asr" \
	--model_name_or_path="./" \
	--dataset_config_name="clean" \
	--train_split_name="train.100" \
	--eval_split_name="validation" \
	--output_dir="./" \
	--preprocessing_num_workers="16" \
	--length_column_name="input_length" \
	--overwrite_output_dir \
	--num_train_epochs="5" \
	--per_device_train_batch_size="8" \
	--per_device_eval_batch_size="8" \
	--gradient_accumulation_steps="1" \
	--learning_rate="3e-4" \
	--warmup_steps="400" \
	--evaluation_strategy="steps" \
	--text_column_name="text" \
	--save_steps="400" \
	--eval_steps="400" \
	--logging_steps="10" \
	--save_total_limit="1" \
	--freeze_feature_encoder \
	--gradient_checkpointing \
	--fp16 \
	--group_by_length \
	--predict_with_generate \
	--do_train --do_eval \
	--do_lower_case

On 8 V100 GPUs, this script should run in ca. 45 minutes and yield a cross-entropy loss of 0.405 and word error rate of 0.0728.

Examples Seq2Seq

Librispeech Seq2Seq

| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | Phoneme error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|---------|----------------|------------------|-------------------------|----------------------------|-----------|---------------|-------------------------|----------------------|
| Librispeech | "clean" - "train.100" | facebook/wav2vec2-base and facebook/bart-base | 0.0728 | - | 8 GPU V100 | 45min | here | create_model.py & run.sh |
| Librispeech | "clean" - "train.100" | facebook/wav2vec2-large-lv60 and facebook/bart-large | 0.0486 | - | 8 GPU V100 | 1h20min | here | create_model.py & run.sh |