XLM-V: A New Multilingual Masked Language Model That Tackles the Vocabulary Bottleneck

The issue raised by the paper “XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models” is that while language models have grown in parameter count and depth, their vocabulary sizes have remained largely unchanged. For instance, the mT5 model has 13B parameters but a 250K-token vocabulary covering more than 100 languages. That leaves each language with roughly 2,500 unique tokens on average, which is clearly very few.
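The back-of-the-envelope arithmetic behind the bottleneck can be sketched in a few lines (a toy illustration, not code from the paper):

```python
# Toy illustration of the vocabulary bottleneck: a shared vocabulary
# split evenly across many languages leaves very few tokens per language.
def tokens_per_language(vocab_size: int, num_languages: int) -> int:
    """Average number of unique tokens available to each language."""
    return vocab_size // num_languages

# mT5-style setup: a 250K-token vocabulary shared across ~100 languages
print(tokens_per_language(250_000, 100))  # → 2500
```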


What do the authors do? They train a new model with a 1-million-token vocabulary. The starting point is the existing XLM-R; with this upgrade it becomes XLM-V. The authors set out to measure how much improvement such a large increase in vocabulary size yields.

What about XLM-V is new that XLM-R did not?


Following the approach of Improving Multilingual Models with Language-Clustered Vocabularies, a lexical representation vector is constructed for each language: for each language in the set, a binary vector is built in which each element corresponds to a specific token of the shared vocabulary, and a one indicates that the token occurs in that language’s data (a graphic description is available in the attachments). The authors then improve on this representation by replacing the binary indicators with the negative log-probability of each token’s occurrence in the language.
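A minimal sketch of such a lexical vector, assuming a plain Python token list per language and a hypothetical `smoothing` floor for tokens unseen in that language (details the paper does not spell out):

```python
import math
from collections import Counter

def lexical_vector(lang_tokens, shared_vocab, smoothing=1e-9):
    """Per-language representation vector over the shared vocabulary.

    The binary variant puts a 1 where a token occurs in the language's
    data; the refinement sketched here instead stores the negative
    log-probability of each token, so frequent tokens get small values
    and rare or absent tokens get large ones.
    """
    counts = Counter(lang_tokens)
    total = sum(counts.values())
    vector = []
    for token in shared_vocab:
        p = counts.get(token, 0) / total
        # Unseen tokens fall back to the smoothing floor (an assumption).
        vector.append(-math.log(p) if p > 0 else -math.log(smoothing))
    return vector
```

Vectors built this way can then be compared or clustered across languages, since every language is described in the coordinates of the same shared vocabulary.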

  1. These vectors are then clustered, and a separate SentencePiece model is trained for each cluster, which prevents vocabulary transfer between lexically unrelated languages.
  2. The ALP (average log probability) metric assesses how well a vocabulary represents a given language.
  3. The final step applies the ULM (unigram language model) vocabulary-construction algorithm, which starts from a large initial vocabulary and incrementally prunes it until the number of tokens falls below the target vocabulary size.
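Steps 2 and 3 above can be sketched as follows; `average_log_probability` and `prune_vocab` are hypothetical simplifications for illustration, not the paper's actual implementation (the real pipeline trains SentencePiece unigram models per cluster):

```python
import math

def average_log_probability(tokens, unigram_probs):
    """ALP sketch (step 2): mean log-probability of the tokens produced
    when a vocabulary segments a language's corpus. Values closer to 0
    mean the vocabulary represents the language better."""
    logps = [math.log(unigram_probs[t]) for t in tokens]
    return sum(logps) / len(logps)

def prune_vocab(vocab_scores, max_size, keep_fraction=0.8):
    """ULM-style pruning sketch (step 3): repeatedly drop the
    lowest-scoring tokens until the vocabulary fits the size budget."""
    vocab = dict(vocab_scores)
    while len(vocab) > max_size:
        target = max(max_size, int(len(vocab) * keep_fraction))
        worst = sorted(vocab.items(), key=lambda kv: kv[1])
        for tok, _ in worst[: len(vocab) - target]:
            del vocab[tok]
    return vocab
```

Pruning in rounds (rather than all at once) mirrors the incremental trimming described above, where token scores can in principle be re-estimated between rounds.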


