I am trying to train a new tokenizer on Greek text so that I can later add the new tokens to the Llama 3.1 tokenizer with
tokenizer.add_tokens(list(new_tokens)).
However, upon training the byte-pair encoding tokenizer on Greek and Spanish text, the result looks something like this:
['Translate', 'Ġfrom', 'ĠGreek', 'Ġto', 'ĠSpanish', ':', 'ĠÎĿα', 'ĠÎŃÏĥι', 'Ġογί', 'ĠγÎŃÏģοÏħ']
When I extend the tokenizer's vocabulary with them, those tokens appear to be passed literally as byte-level stand-ins rather than as Greek characters, and the tokenizer does not recognize them when encoding a sentence. However, the same method does work when the new tokens are hardcoded, such as in
extender_tokenizer.add_tokens(['Αυτό', 'είναι'])
it does work.
I assume this is an encoding issue or something related to how BPE works internally. Why are the Greek characters displayed that way? Is it caused by the encoding, by BPE, or by both? And how can I obtain a list of Greek tokens that can be added to the tokenizer?
Reference code:
from tokenizers import Tokenizer, models, trainers, pre_tokenizers

# training_corpus is an iterator over raw text, defined elsewhere
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.BpeTrainer(vocab_size=2000, min_frequency=3, show_progress=True)
tokenizer.train_from_iterator(training_corpus, trainer=trainer)
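The effect is reproducible with the ByteLevel pre-tokenizer alone; a minimal sketch, no training involved ('Να είσαι' is just sample Greek text):

from tokenizers import pre_tokenizers

# ByteLevel rewrites every UTF-8 byte as a printable stand-in character
# (a space becomes 'Ġ'), so each two-byte Greek letter shows up as two
# Latin-looking characters.
pre = pre_tokenizers.ByteLevel(add_prefix_space=False)
print([tok for tok, _ in pre.pre_tokenize_str("Να είσαι")])
# ['ÎĿÎ±', 'ĠÎµÎ¯ÏĥÎ±Î¹']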
asked Nov 19, 2024 at 8:31 by Anmova
1 Answer
As Joop Eggen noted in the comments, using tokenizers.UnicodeScripts as the pre-tokenizer instead of ByteLevel solved the issue.
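A minimal sketch of that change, reusing the question's training_corpus and assuming extender_tokenizer is the loaded Llama 3.1 tokenizer; with UnicodeScripts the trained vocabulary contains plain Greek strings that add_tokens accepts directly:

from tokenizers import Tokenizer, models, trainers, pre_tokenizers

tokenizer = Tokenizer(models.BPE())
# UnicodeScripts splits where the writing system changes and leaves the
# characters themselves intact, so trained tokens stay readable Greek.
tokenizer.pre_tokenizer = pre_tokenizers.UnicodeScripts()
trainer = trainers.BpeTrainer(vocab_size=2000, min_frequency=3, show_progress=True)
tokenizer.train_from_iterator(training_corpus, trainer=trainer)

# Add only tokens that Llama 3.1 does not already have
new_tokens = set(tokenizer.get_vocab()) - set(extender_tokenizer.get_vocab())
extender_tokenizer.add_tokens(list(new_tokens))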
Use tokenizers.UnicodeScripts instead of ByteLevel. Disrupting the single bytes of a multibyte character will create chaos. (Recognizing Unicode, UTF-16, or more likely UTF-8, is another subject.) – Joop Eggen, Nov 19, 2024 at 8:40
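Alternatively, if the vocabulary has already been trained with ByteLevel, the matching decoder inverts the byte-to-character mapping, which yields readable tokens to pass to add_tokens; a sketch (note that tokens starting with 'Ġ' decode to strings with a leading space):

from tokenizers import decoders

decoder = decoders.ByteLevel()
# Decode each trained token back into ordinary text
readable_tokens = [decoder.decode([tok]) for tok in tokenizer.get_vocab()]
print(decoder.decode(["ÎĿÎ±"]))  # 'Να'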