Victor SANH 778e635fc9
[model_cards] roberta-base-finetuned-yelp-polarity (#6009)

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-07-24 09:45:21 -04:00
---
language: en
datasets:
- yelp_polarity
---

# RoBERTa-base-finetuned-yelp-polarity

This is a RoBERTa-base checkpoint fine-tuned for binary sentiment classification on the Yelp polarity dataset. It reaches 98.08% accuracy on the test set.
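The card does not include a usage snippet; a minimal sketch using the `transformers` text-classification pipeline is shown below. The model id is assumed from the card's repository path (`VictorSanh/roberta-base-finetuned-yelp-polarity`):

```python
from transformers import pipeline

# Sketch: model id inferred from the card's repo path, not stated in the card itself.
classifier = pipeline(
    "sentiment-analysis",
    model="VictorSanh/roberta-base-finetuned-yelp-polarity",
)

result = classifier("The food was amazing and the staff were friendly.")
print(result)  # a list with one dict containing 'label' and 'score'
```

The pipeline handles tokenization and batching internally, so no manual preprocessing is needed for quick inference.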

## Hyper-parameters

We used the following hyper-parameters to train the model on one GPU:

```
num_train_epochs            = 2.0
learning_rate               = 1e-05
weight_decay                = 0.0
adam_epsilon                = 1e-08
max_grad_norm               = 1.0
per_device_train_batch_size = 32
gradient_accumulation_steps = 1
warmup_steps                = 3500
seed                        = 42
```