nlp - Understanding byte-pair encoding tokenization for Greek characters - Stack Overflow


I am trying to train a new tokenizer with Greek text to later add the new tokens into the Llama 3.1 tokenizer using

tokenizer.add_tokens(list(new_tokens)).

However, upon training the byte-pair encoding tokenizer on Greek and Spanish text, the result looks something like this:

['Translate', 'Ġfrom', 'ĠGreek', 'Ġto', 'ĠSpanish', ':', 'ĠÎĿα', 'ĠÎŃÏĥι', 'Ġογί', 'ĠγÎŃÏģοÏħ']

When I extend the tokenizer's vocabulary with these trained tokens, they appear to be added literally as those byte-level strings rather than as Greek text, and the extended tokenizer does not use them when encoding a sentence. However, when the new tokens are hardcoded using the same method, such as in

extender_tokenizer.add_tokens(['Αυτό', 'είναι'])

it does work.

I assume this is an encoding issue or something related to the inner workings of BPE. Why are the Greek characters shown that way? Is it related to the encoding, to BPE, or to both? And how can I obtain a list of Greek tokens that can be added to the tokenizer?

Reference code:

from tokenizers import Tokenizer, models, trainers, pre_tokenizers

# training_corpus is an iterator over the Greek/Spanish training text
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.BpeTrainer(vocab_size=2000, min_frequency=3, show_progress=True)
tokenizer.train_from_iterator(training_corpus, trainer=trainer)
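
The substitution can already be reproduced with the pre-tokenizer on its own, before any merges are learned (the Greek string below is just an illustrative example, not taken from my corpus):

from tokenizers import pre_tokenizers

# Even without training, the ByteLevel pre-tokenizer already turns the Greek
# input into 'Ġ'/'ÎĿ'-style strings, so the trainer never sees the raw characters
print(pre_tokenizers.ByteLevel().pre_tokenize_str("Να είσαι καλά"))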

asked Nov 19, 2024 at 8:31 by Anmova
  • Use tokenizers.UnicodeScripts instead of ByteLevel. Disrupting the individual bytes of a multibyte character will create chaos. (Recognizing Unicode, UTF-16, or more likely UTF-8 is another subject.) – Joop Eggen Commented Nov 19, 2024 at 8:40
  • This solved the tokenization problem and characters could be added to the base tokenizer. Thank you! – Anmova Commented Nov 19, 2024 at 12:03
  • If this solved everything, write your own answer and accept it so the question gets closed. Thanks. Love NLP. – Joop Eggen Commented Nov 19, 2024 at 15:49

1 Answer


As Joop Eggen noted in the comments, using the UnicodeScripts pre-tokenizer instead of ByteLevel solved the issue. The ByteLevel pre-tokenizer first maps every UTF-8 byte to a printable stand-in character (a space becomes 'Ġ', and each multi-byte Greek character becomes a pair of Latin-looking symbols such as 'ÎĿ'), so the trained tokens are byte-level strings rather than Greek text. UnicodeScripts splits the input on Unicode script boundaries and leaves the characters untouched, so the resulting tokens are plain Greek strings that can be added to the base tokenizer with add_tokens.
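
For completeness, a sketch of the training setup that worked for me, based on the code in the question. Here training_corpus, base_tokenizer and base_vocab are placeholders for the Greek/Spanish corpus, the Llama 3.1 tokenizer and its vocabulary, and the explicit unk_token is an optional addition, since without the byte-level alphabet unseen characters are no longer guaranteed to be covered:

from tokenizers import Tokenizer, models, trainers, pre_tokenizers

# BPE over the original Unicode characters, with an explicit unknown token
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
# Split on Unicode script boundaries instead of mapping bytes to stand-in characters
tokenizer.pre_tokenizer = pre_tokenizers.UnicodeScripts()
trainer = trainers.BpeTrainer(vocab_size=2000, min_frequency=3,
                              show_progress=True, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(training_corpus, trainer=trainer)

# The learned vocabulary now contains readable Greek strings, which can be
# added to the base Llama 3.1 tokenizer directly
new_tokens = [tok for tok in tokenizer.get_vocab() if tok not in base_vocab]
base_tokenizer.add_tokens(new_tokens)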
