hezar.preprocessors.tokenizers.wordpiece module

class hezar.preprocessors.tokenizers.wordpiece.WordPieceConfig(max_length: int = 512, truncation_strategy: str = 'longest_first', truncation_direction: str = 'right', stride: int = 0, padding_strategy: str = 'longest', padding_direction: str = 'right', pad_to_multiple_of: int = 0, pad_token_type_id: int = 0, bos_token: 'str' = None, eos_token: 'str' = None, unk_token: str = '[UNK]', sep_token: str = '[SEP]', pad_token: str = '[PAD]', cls_token: str = '[CLS]', mask_token: str = '[MASK]', additional_special_tokens: List[str] = None, wordpieces_prefix: str = '##', vocab_size: int = 30000, min_frequency: int = 2, limit_alphabet: int = 1000, initial_alphabet: list = <factory>, show_progress: bool = True)[source]

Bases: TokenizerConfig

additional_special_tokens: List[str] = None
cls_token: str = '[CLS]'
initial_alphabet: list
limit_alphabet: int = 1000
mask_token: str = '[MASK]'
max_length: int = 512
min_frequency: int = 2
name: str = 'wordpiece_tokenizer'
pad_to_multiple_of: int = 0
pad_token: str = '[PAD]'
pad_token_type_id: int = 0
padding_direction: str = 'right'
padding_strategy: str = 'longest'
sep_token: str = '[SEP]'
show_progress: bool = True
stride: int = 0
truncation_direction: str = 'right'
truncation_strategy: str = 'longest_first'
unk_token: str = '[UNK]'
vocab_size: int = 30000
wordpieces_prefix: str = '##'
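
A minimal configuration sketch, assuming the dataclass constructor shown above; the overridden values are purely illustrative, everything not passed keeps the defaults listed above:

    from hezar.preprocessors.tokenizers.wordpiece import WordPieceConfig

    # Override a few fields; all other fields keep their documented defaults.
    config = WordPieceConfig(
        vocab_size=10000,
        min_frequency=2,
        max_length=256,
    )
    print(config.name)  # wordpiece_tokenizer
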
class hezar.preprocessors.tokenizers.wordpiece.WordPieceTokenizer(config, tokenizer_file=None, **kwargs)[source]

Bases: Tokenizer

A standard WordPiece tokenizer using 🤗 HuggingFace Tokenizers.

Parameters:
  • config – Preprocessor config for the tokenizer

  • **kwargs – Extra/manual config parameters
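
A hedged construction sketch, reusing the config from the previous example; per the signature, extra keyword arguments act as manual config overrides. Note that a tokenizer built from scratch this way may still need to be trained (see train() below) before it can encode text.

    from hezar.preprocessors.tokenizers.wordpiece import (
        WordPieceConfig,
        WordPieceTokenizer,
    )

    config = WordPieceConfig(vocab_size=10000)
    # Extra keyword arguments are manual config overrides, e.g. max_length.
    tokenizer = WordPieceTokenizer(config, max_length=256)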

build()[source]

Build the tokenizer.

Returns:

The built tokenizer.

Return type:

HFTokenizer
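
Since build() returns the backing 🤗 Tokenizers object (HFTokenizer), it can be used to reach options not exposed through the config; a small sketch, assuming a tokenizer instance as above:

    # The returned object is the underlying HuggingFace `tokenizers` tokenizer.
    hf_tokenizer = tokenizer.build()
    print(type(hf_tokenizer))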

required_backends: List[str | Backends] = [Backends.TOKENIZERS]
token_ids_name = 'token_ids'
tokenizer_config_filename = 'tokenizer_config.yaml'
tokenizer_filename = 'tokenizer.json'
train(files: List[str], **train_kwargs)[source]

Train the model using the given files.
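
A training sketch with placeholder file paths; any extra keyword arguments are forwarded as train_kwargs:

    # The paths below are placeholders for plain-text corpus files.
    corpus_files = ["corpus_part1.txt", "corpus_part2.txt"]
    tokenizer.train(files=corpus_files)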

train_from_iterator(dataset: List[str], **train_kwargs)[source]

Train the model using the given dataset of text samples.
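
A sketch for in-memory data, assuming the dataset is a list of text samples as typed in the signature:

    samples = [
        "this is a sample sentence",
        "wordpiece splits rare words into subword pieces",
    ]
    tokenizer.train_from_iterator(samples)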