# Nezha

## Overview

The Nezha model was proposed in [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei et al.

The abstract from the paper is the following:

*The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks due to its capacity to capture the deep contextualized information in text by pre-training on large-scale corpora. In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed representation for CHinese lAnguage understanding) on Chinese corpora and finetuning for the Chinese NLU tasks. The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional Relative Positional Encoding as an effective positional encoding scheme, Whole Word Masking strategy, Mixed Precision Training and the LAMB Optimizer in training the models. The experimental results show that NEZHA achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti) and natural language inference (XNLI).*

This model was contributed by [sijunhe](https://huggingface.co/sijunhe). The original code can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model).
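
A minimal usage sketch follows. It assumes the `sijunhe/nezha-cn-base` checkpoint on the Hugging Face Hub and a Transformers version that still ships the Nezha classes; substitute whichever Nezha checkpoint you actually use.

```python
import torch
from transformers import AutoTokenizer, NezhaModel

# "sijunhe/nezha-cn-base" is an assumed checkpoint name; any Nezha checkpoint works here.
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaModel.from_pretrained("sijunhe/nezha-cn-base")

inputs = tokenizer("我爱北京天安门", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Final-layer hidden states, shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```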

## Resources

## NezhaConfig

[[autodoc]] NezhaConfig
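
As with other Transformers models, a configuration can be instantiated on its own to build a randomly initialized model; a minimal sketch:

```python
from transformers import NezhaConfig, NezhaModel

# Initializing a configuration with the default Nezha hyperparameters
configuration = NezhaConfig()

# Initializing a (randomly weighted) model from that configuration
model = NezhaModel(configuration)

# Accessing the model configuration
configuration = model.config
```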

## NezhaModel

[[autodoc]] NezhaModel
    - forward

## NezhaForPreTraining

[[autodoc]] NezhaForPreTraining
    - forward
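
A rough sketch of running the pretraining heads (masked language modeling plus next-sentence prediction), again assuming the `sijunhe/nezha-cn-base` checkpoint; the output attribute names follow the usual `*ForPreTraining` pattern:

```python
import torch
from transformers import AutoTokenizer, NezhaForPreTraining

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForPreTraining.from_pretrained("sijunhe/nezha-cn-base")

inputs = tokenizer("我爱北京天安门。", "天安门上太阳升。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Masked-LM logits over the vocabulary and the two-way next-sentence logits
print(outputs.prediction_logits.shape, outputs.seq_relationship_logits.shape)
```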

## NezhaForMaskedLM

[[autodoc]] NezhaForMaskedLM
    - forward
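
A short fill-mask sketch, assuming the same `sijunhe/nezha-cn-base` checkpoint:

```python
import torch
from transformers import AutoTokenizer, NezhaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForMaskedLM.from_pretrained("sijunhe/nezha-cn-base")

inputs = tokenizer("我爱北京天[MASK]门", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and take the highest-scoring token for it
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```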

## NezhaForNextSentencePrediction

[[autodoc]] NezhaForNextSentencePrediction
    - forward
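
A sketch of scoring whether one sentence follows another (index 0 = continuation, index 1 = random sentence, as in BERT), under the same checkpoint assumption:

```python
import torch
from transformers import AutoTokenizer, NezhaForNextSentencePrediction

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForNextSentencePrediction.from_pretrained("sijunhe/nezha-cn-base")

sentence_a = "我今天去了图书馆。"
sentence_b = "那里有很多新书。"
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch_size, 2)

# 0 -> sentence_b is a continuation of sentence_a, 1 -> it is unrelated
print(logits.argmax(dim=-1))
```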

## NezhaForSequenceClassification

[[autodoc]] NezhaForSequenceClassification
    - forward
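
A sketch of the sequence-classification head (e.g. for ChnSenti-style sentiment). The head is randomly initialized until fine-tuned, and `num_labels` and the label below are illustrative:

```python
import torch
from transformers import AutoTokenizer, NezhaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForSequenceClassification.from_pretrained("sijunhe/nezha-cn-base", num_labels=2)

inputs = tokenizer("这家餐厅的菜很好吃。", return_tensors="pt")
labels = torch.tensor([1])  # hypothetical "positive" label

outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits)
```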

## NezhaForMultipleChoice

[[autodoc]] NezhaForMultipleChoice
    - forward
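
A sketch of the multiple-choice head, where each candidate is paired with the prompt and the tensors are reshaped to `(batch_size, num_choices, seq_len)`; the checkpoint name and candidates are illustrative:

```python
import torch
from transformers import AutoTokenizer, NezhaForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForMultipleChoice.from_pretrained("sijunhe/nezha-cn-base")

prompt = "今天天气很好，"
choices = ["我们去公园散步。", "他是一名工程师。"]

# Pair the prompt with every choice, then add a batch dimension of size 1
encoding = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch_size, num_choices)
print(logits.argmax(dim=-1))
```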

## NezhaForTokenClassification

[[autodoc]] NezhaForTokenClassification
    - forward
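
A sketch of the token-classification head (e.g. for People's Daily NER tags). The head is randomly initialized until fine-tuned and the label count below is only an example:

```python
import torch
from transformers import AutoTokenizer, NezhaForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForTokenClassification.from_pretrained("sijunhe/nezha-cn-base", num_labels=5)

inputs = tokenizer("人民日报今天报道了这条新闻。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch_size, sequence_length, num_labels)

# One predicted tag id per token (including the special tokens)
print(logits.argmax(dim=-1))
```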

## NezhaForQuestionAnswering

[[autodoc]] NezhaForQuestionAnswering
    - forward
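
A sketch of extractive question answering: take the most likely start and end positions and decode the span between them. The checkpoint, question, and context are illustrative, and the QA head is randomly initialized until fine-tuned:

```python
import torch
from transformers import AutoTokenizer, NezhaForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForQuestionAnswering.from_pretrained("sijunhe/nezha-cn-base")

question = "公司总部在哪里？"
context = "这家公司的总部位于深圳。"
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Most likely start/end token indices, then decode the answer span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```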