Updated documentation for LM finetuning script
parent 3fbf301bba
commit 7f522437bc
@@ -459,7 +459,7 @@ The same options as in the original scripts are provided; please refer to the code

Causal LM fine-tuning on GPT/GPT-2, Masked LM fine-tuning on BERT/RoBERTa
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before running the following examples you should download the `WikiText-2 dataset <https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/>`__ and unpack it to some directory `$WIKITEXT_2_DATASET`.
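
As a minimal download-and-unpack sketch: the archive URL and file layout below are assumptions based on the commonly mirrored ``wikitext-2-raw-v1.zip`` distribution, not something this documentation specifies.

.. code-block:: bash

    # Assumed mirror and archive layout (the zip expands to
    # wikitext-2-raw/wiki.{train,valid,test}.raw); adjust if the host has moved.
    export WIKITEXT_2_DATASET=/path/to/wikitext_dataset
    mkdir -p $WIKITEXT_2_DATASET
    curl -LO https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip
    unzip wikitext-2-raw-v1.zip
    mv wikitext-2-raw/* "$WIKITEXT_2_DATASET"/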

The following results were obtained using the `raw` WikiText-2 (no tokens were replaced before tokenization).

@@ -467,6 +467,8 @@ The following results were obtained using the `raw` WikiText-2 (no tokens were replaced before tokenization).

This example fine-tunes GPT-2 on the WikiText-2 dataset. The loss function is a causal language modeling loss; the reported evaluation metric is perplexity.

.. code-block:: bash

    export WIKITEXT_2_DATASET=/path/to/wikitext_dataset

    python run_lm_finetuning.py
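
The command above is cut off at the hunk boundary. As a hedged sketch of a complete invocation, assuming the script's usual argparse flags (``--model_type``, ``--train_data_file``, and the rest are not shown in this diff; verify them against ``python run_lm_finetuning.py --help``):

.. code-block:: bash

    # Flag names below are assumptions, not taken from this diff.
    export WIKITEXT_2_DATASET=/path/to/wikitext_dataset

    python run_lm_finetuning.py \
        --output_dir=output \
        --model_type=gpt2 \
        --model_name_or_path=gpt2 \
        --do_train \
        --train_data_file=$WIKITEXT_2_DATASET/wiki.train.raw \
        --do_eval \
        --eval_data_file=$WIKITEXT_2_DATASET/wiki.test.raw
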
@@ -485,6 +487,8 @@ This example fine-tunes RoBERTa on the WikiText-2 dataset. The loss function is a masked language modeling loss.

The `--mlm` flag is necessary to fine-tune BERT/RoBERTa on masked language modeling.

.. code-block:: bash

    export WIKITEXT_2_DATASET=/path/to/wikitext_dataset

    python run_lm_finetuning.py
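
As above, the invocation is truncated here; a sketch of the full RoBERTa command under the same assumed flags, with ``--mlm`` appended for the masked language modeling objective:

.. code-block:: bash

    # Flag names below are assumptions, not taken from this diff.
    export WIKITEXT_2_DATASET=/path/to/wikitext_dataset

    python run_lm_finetuning.py \
        --output_dir=output \
        --model_type=roberta \
        --model_name_or_path=roberta-base \
        --do_train \
        --train_data_file=$WIKITEXT_2_DATASET/wiki.train.raw \
        --do_eval \
        --eval_data_file=$WIKITEXT_2_DATASET/wiki.test.raw \
        --mlm
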
@@ -2,35 +2,35 @@ DistilBERT

----------------------------------------------------

``DistilBertConfig``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: pytorch_transformers.DistilBertConfig
    :members:


``DistilBertTokenizer``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: pytorch_transformers.DistilBertTokenizer
    :members:


``DistilBertModel``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: pytorch_transformers.DistilBertModel
    :members:


``DistilBertForMaskedLM``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: pytorch_transformers.DistilBertForMaskedLM
    :members:


``DistilBertForSequenceClassification``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: pytorch_transformers.DistilBertForSequenceClassification
    :members: