<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# XLM-V

## Overview

XLM-V is a multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (the same data as XLM-R).
It was introduced in the [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472)
paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa.

From the abstract of the XLM-V paper:

*Large multilingual language models typically rely on a single vocabulary shared across 100+ languages.
As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged.
This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R.
In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by
de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity
to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically
more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V,
a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we
tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and
named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).*

This model was contributed by [stefan-it](https://huggingface.co/stefan-it), including detailed experiments with XLM-V on downstream tasks.
The experiments repository can be found [here](https://github.com/stefan-it/xlm-v-experiments).
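
The abstract's claim that XLM-V tokenizations are typically shorter than XLM-R's is easy to check by comparing the two tokenizers side by side. The snippet below is a small illustrative sketch: the sample sentence is arbitrary, `xlm-roberta-base` is used here as the XLM-R baseline checkpoint, and the exact token counts will vary with the input text and language.

```python
from transformers import AutoTokenizer

# Load the XLM-R and XLM-V tokenizers from their Hub checkpoints.
xlmr_tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
xlmv_tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-v-base")

sentence = "Multilingual models share one vocabulary across many languages."

# Compare how each vocabulary segments the same sentence.
xlmr_tokens = xlmr_tokenizer.tokenize(sentence)
xlmv_tokens = xlmv_tokenizer.tokenize(sentence)
print(f"XLM-R: {len(xlmr_tokens)} tokens -> {xlmr_tokens}")
print(f"XLM-V: {len(xlmv_tokens)} tokens -> {xlmv_tokens}")
```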
## Usage tips
- XLM-V is compatible with the XLM-RoBERTa model architecture; only the model weights from the [`fairseq`](https://github.com/facebookresearch/fairseq)
  library had to be converted.
- The `XLMTokenizer` implementation is used to load the vocab and perform tokenization.

An XLM-V (base size) model is available under the [`facebook/xlm-v-base`](https://huggingface.co/facebook/xlm-v-base) identifier.
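
As a quick sanity check that the checkpoint loads as expected, the sketch below uses the generic Auto classes for masked-language modeling; the example sentence and the use of `AutoModelForMaskedLM` are illustrative choices, not taken from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the checkpoint mentioned above; AutoTokenizer resolves the right tokenizer class.
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-v-base")
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-v-base")

# Predict the masked token in a simple sentence.
inputs = tokenizer("Paris is the <mask> of France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token at the mask position and decode it.
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```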
<Tip>

The XLM-V architecture is the same as XLM-RoBERTa's; refer to the [XLM-RoBERTa documentation](xlm-roberta) for API reference and examples.

</Tip>
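
Because the architectures match, the XLM-RoBERTa model classes can also be used directly with the XLM-V checkpoint, for example to set it up for fine-tuning. The sketch below is a hedged illustration: the number of labels and the input sentence are placeholders, and the classification head is freshly initialized, so it only produces meaningful predictions after fine-tuning.

```python
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

# The XLM-V weights load into the XLM-RoBERTa classes because the architecture is identical.
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-v-base")
model = XLMRobertaForSequenceClassification.from_pretrained("facebook/xlm-v-base", num_labels=3)

# Run a forward pass; the classification head is randomly initialized at this point.
inputs = tokenizer("XLM-V usa un vocabulario de un millón de tokens.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 3]) -- untrained logits, shown for illustration only
```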