libai.tokenizer

class libai.tokenizer.BertTokenizer(vocab_file, do_lower_case=True, do_basic_tokenize=True, never_split=None, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', tokenize_chinese_chars=True, do_chinese_wwm=False, add_bos_token=False, **kwargs)[source]

Construct a BERT tokenizer. Based on WordPiece.

Parameters
  • vocab_file (str) – Path to a one-wordpiece-per-line vocabulary file.

  • do_lower_case (bool, optional, defaults to True) – Whether to lower case the input. Only has an effect when do_basic_tokenize=True.

  • do_basic_tokenize (bool, optional, defaults to True) – Whether to do basic tokenization before wordpiece.

  • never_split (Iterable, optional) – List of tokens which will never be split during tokenization. Only has an effect when do_basic_tokenize=True.

  • tokenize_chinese_chars (bool, optional, defaults to True) – Whether to tokenize Chinese characters. This should likely be deactivated for Japanese, see: https://github.com/huggingface/pytorch-pretrained-BERT/issues/328.

  • do_chinese_wwm (bool, optional, defaults to False) – Whether to do whole word masking for Chinese. The Chinese sentence is first segmented by a third-party tool; each non-leading sub-word is prefixed with '##' and its index is calculated as id(##A) = id(A) + vocab_size.
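
A minimal usage sketch (not from the original docs; the vocabulary path below is hypothetical):

# Hypothetical path to a one-wordpiece-per-line vocabulary file.
tokenizer = BertTokenizer(vocab_file='./vocab.txt', do_lower_case=True)
tokens = tokenizer.tokenize("Hello, LiBai!")   # WordPiece tokens
ids = tokenizer.convert_tokens_to_ids(tokens)  # integer ids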

property vocab_size

Size of the base vocabulary (without the added tokens).

get_vocab()[source]

Returns the vocabulary as a dictionary of token to index. tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns

The vocabulary.

Return type

Dict[str, int]

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (string) to a single string.

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Add special tokens to a sequence or a pair of sequences. BERT format sentence input:

  • single sequence: [CLS] tokens_a [SEP]

  • pair of sequences: [CLS] tokens_a [SEP] tokens_b [SEP]

Parameters
  • token_ids_0 (List[int]) – The token ids of sentence 0.

  • token_ids_1 (List[int], optional) – The token ids of sentence 1. Defaults to None.

Returns

The sequence after adding special tokens.

Return type

List[int]
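
A usage sketch of the formats listed above (assuming tokenizer is a constructed BertTokenizer and ids_a/ids_b are lists of token ids, e.g. from convert_tokens_to_ids):

single = tokenizer.build_inputs_with_special_tokens(ids_a)       # [CLS] ids_a [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # [CLS] ids_a [SEP] ids_b [SEP]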

save_vocabulary(save_directory, filename_prefix=None)[source]

Save the tokenizer vocabulary to a directory or file.

class libai.tokenizer.RobertaTokenizer(vocab_file, merges_file, errors='replace', bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', add_bos_token=False, **kwargs)[source]

Constructs a RoBERTa tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.

Parameters
  • vocab_file (str) – Path to the vocabulary file.

  • merges_file (str) – Path to the merges file.

  • errors (str, optional, defaults to "replace") – Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

  • bos_token (str, optional, defaults to <s>) – The beginning of sequence token.

  • eos_token (str, optional, defaults to </s>) – The end of sequence token.

  • cls_token (str, optional, defaults to <s>) – The first token of the sequence when built with special tokens.

  • unk_token (str, optional, defaults to <unk>) – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

  • pad_token (str, optional, defaults to <pad>) – A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation.

  • mask_token (str, optional, defaults to <mask>) – A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT).

property vocab_size

Size of the base vocabulary (without the added tokens).

get_vocab()[source]

Returns the vocabulary as a dictionary of token to index. tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns

The vocabulary.

Return type

Dict[str, int]

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (string) to a single string.

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Add special tokens to a sequence or a pair of sequences. RoBERTa format sentence input:

  • single sequence: [CLS] tokens_a [SEP]

  • pair of sequences: [CLS] tokens_a [SEP] tokens_b [SEP]

Parameters
  • token_ids_0 (List[int]) – The token ids of sentence 0.

  • token_ids_1 (List[int], optional) – The token ids of sentence 1. Defaults to None.

Returns

The sequence after adding special tokens.

Return type

List[int]

save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str][source]

Save the tokenizer vocabulary to a directory. This method does NOT save added tokens and special token mappings. Please use save_pretrained() to save the full Tokenizer state if you want to reload it using the from_pretrained() class method.

create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not make use of token type ids, therefore a list of zeros is returned.

Parameters
  • token_ids_0 (List[int]) – List of IDs.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of zeros.

Return type

List[int]
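
A sketch of the behaviour described above (assuming roberta_tok is a constructed RobertaTokenizer and ids_a/ids_b are token id lists):

type_ids = roberta_tok.create_token_type_ids_from_sequences(ids_a, ids_b)
assert all(t == 0 for t in type_ids)  # RoBERTa does not use token type ids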

class libai.tokenizer.GPT2Tokenizer(vocab_file, merges_file, errors='replace', unk_token='<|endoftext|>', bos_token='<|endoftext|>', eos_token='<|endoftext|>', add_bos_token=False, **kwargs)[source]

Construct a GPT-2 tokenizer. Based on byte-level Byte-Pair-Encoding.

Parameters
  • vocab_file (str) – Path to the vocabulary file.

  • merges_file (str) – Path to the merges file.

  • errors (str, optional, defaults to "replace") – Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

  • unk_token (str, optional, defaults to <|endoftext|>) – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

  • bos_token (str, optional, defaults to <|endoftext|>) – The beginning of sequence token.

  • eos_token (str, optional, defaults to <|endoftext|>) – The end of sequence token.
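
A minimal construction sketch (not from the original docs; the vocab.json and merges.txt paths are hypothetical, and byte-level BPE needs both files):

# Hypothetical local paths to GPT-2 BPE files.
gpt2_tok = GPT2Tokenizer(vocab_file='./vocab.json', merges_file='./merges.txt')
tokens = gpt2_tok.tokenize("Hello, LiBai!")
text = gpt2_tok.convert_tokens_to_string(tokens)  # joins the byte-level tokens back into text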

property vocab_size

Size of the base vocabulary (without the added tokens).

get_vocab()[source]

Returns the vocabulary as a dictionary of token to index. tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns

The vocabulary.

Return type

Dict[str, int]

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (string) to a single string.

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Add special tokens to a sequence or a pair of sequences, following the GPT-2 input format.

Parameters
  • token_ids_0 (List[int]) – The token ids of sentence 0.

  • token_ids_1 (List[int], optional) – The token ids of sentence 1. Defaults to None.

Returns

The sequence after adding special tokens.

Return type

List[int]

save_vocabulary(save_directory, filename_prefix=None)[source]

Save the tokenizer vocabulary to a directory. This method does NOT save added tokens and special token mappings. Please use save_pretrained() to save the full Tokenizer state if you want to reload it using the from_pretrained() class method.

class libai.tokenizer.T5Tokenizer(vocab_file, eos_token='</s>', unk_token='<unk>', pad_token='<pad>', extra_ids=100, additional_special_tokens=None, add_bos_token=False, **kwargs)[source]

Construct a T5 tokenizer. Based on SentencePiece (https://github.com/google/sentencepiece).

Parameters
  • vocab_file (str) – Path to the vocabulary file.

  • eos_token (str, optional, defaults to "</s>") – The end of sequence token.

  • unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

  • pad_token (str, optional, defaults to "<pad>") – The token used for padding, for example when batching sequences of different lengths.

  • extra_ids (int, optional, defaults to 100) – Number of extra ids added to the end of the vocabulary, for use as sentinels. These tokens are accessible as "<extra_id_{%d}>", where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are indexed from the end of the vocabulary towards the beginning ("<extra_id_0>" is the last token in the vocabulary, as in T5 preprocessing); see the sketch after this parameter list.

  • additional_special_tokens (List[str], optional) – Additional special tokens used by the tokenizer.
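
A sketch illustrating the sentinel indexing described above (assuming t5_tok is a constructed T5Tokenizer with the default extra_ids=100):

sentinel_id = t5_tok.convert_tokens_to_ids("<extra_id_0>")
# Per the description above, <extra_id_0> is the last token in the vocabulary,
# so sentinel_id is expected to equal t5_tok.vocab_size - 1.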

property vocab_size

Size of the base vocabulary (without the added tokens).

get_vocab()[source]

Returns the vocabulary as a dictionary of token to index. tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns

The vocabulary.

Return type

Dict[str, int]

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (string) to a single string.

build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]

Add special tokens to a sequence or a pair of sequences. T5 format sentence input:

  • single sequence: tokens_a </s>

  • pair of sequences: tokens_a </s> tokens_b </s>

Parameters
  • token_ids_0 (List[int]) – The token ids of sentence 0.

  • token_ids_1 (List[int], optional) – The token ids of sentence 1. Defaults to None.

Returns

The sequence after adding special tokens.

Return type

List[int]

save_vocabulary(save_directory, filename_prefix=None)[source]

Save the tokenizer vocabulary to a directory or file.

class libai.tokenizer.PreTrainedTokenizer(verbose=True, **kwargs)[source]

Base class for all tokenizers.

Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and adding tokens to the vocabulary. This class also handles added tokens in a unified way on top of all tokenizers, so we don't have to handle the specific vocabulary-augmentation methods of the various underlying dictionary structures (BPE, SentencePiece, …).

Class attributes (overridden by derived classes):

vocab_files_names: a python dict with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).

pretrained_vocab_files_map: a python dict of dicts, with the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level keys being the short-cut-names (string) of the pretrained models, and, as associated values, the url (string) to the associated pretrained vocabulary file.

max_model_input_sizes: a python dict with, as keys, the short-cut-names (string) of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.

pretrained_init_configuration: a python dict with, as keys, the short-cut-names (string) of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.

Parameters
  • bos_token (str, optional) – A special token representing the beginning of a sentence.

  • eos_token (str, optional) – A special token representing the end of a sentence.

  • unk_token (str, optional) – A special token representing an out-of-vocabulary token.

  • sep_token (str, optional) – A special token separating two different sentences in the same input (used by BERT for instance).

  • pad_token (str, optional) – A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation.

  • cls_token (str, optional) – A special token representing the class of the input (used by BERT for instance).

  • mask_token (str, optional) – A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT).

  • eod_token (str, optional) – A special token representing the end of a document.

  • additional_special_tokens (tuple or list of str, optional) – A tuple or a list of additional special tokens.

classmethod from_pretrained(*inputs, **kwargs)[source]

Instantiate a PreTrainedTokenizer (or a derived class) from a predefined tokenizer.

Parameters
  • pretrained_model_name_or_path (str or os.PathLike) –

    Can be either:

    • a string with the shortcut name of a predefined tokenizer to load from cache or download, e.g.: bert-base-uncased.

    • a path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.

    • (not applicable to all derived classes) a path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file (e.g. Bert, XLNet), e.g., ./my_model_directory/vocab.txt.

  • cache_dir – (optional) string: Path to a directory in which downloaded predefined tokenizer vocabulary files should be cached if the standard cache should not be used.

  • force_download – (optional) boolean, default False: Force to (re-)download the vocabulary files and override the cached versions if they exist.

  • proxies – (optional) dict, default None: A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

  • inputs – (optional) positional arguments: will be passed to the Tokenizer __init__ method.

  • kwargs – (optional) keyword arguments: will be passed to the Tokenizer __init__ method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the doc string of PreTrainedTokenizer for details.

Examples:

# We can't directly instantiate the base class `PreTrainedTokenizer`, so let's
# show our examples on a derived class: BertTokenizer
# Download vocabulary from S3 and cache.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# If vocabulary files are in a directory (e.g. tokenizer was
# saved using `save_pretrained('./test/saved_model/')`)
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/')
# If the tokenizer uses a single vocabulary file, you can point directly to this file
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/my_vocab.txt')
# You can link tokens to special vocabulary when instantiating
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', unk_token='<unk>')
# You should be sure '<unk>' is in the vocabulary when doing that.
# Otherwise use tokenizer.add_special_tokens({'unk_token': '<unk>'}) instead.
assert tokenizer.unk_token == '<unk>'

save_pretrained(save_directory)[source]

Save the tokenizer vocabulary files together with:

  • added tokens,

  • special-tokens-to-class-attributes-mapping,

  • tokenizer instantiation positional and keywords inputs (e.g. do_lower_case for Bert).

This won't save modifications you may have applied to the tokenizer after instantiation (e.g., modifying tokenizer.do_lower_case after creation), other than added tokens and the special-tokens mapping. This method makes sure the full tokenizer can then be re-loaded using the from_pretrained() class method.

save_vocabulary(save_directory)[source]

Save the tokenizer vocabulary to a directory. This method does NOT save added tokens and special token mappings. Please use save_pretrained() to save the full Tokenizer state if you want to reload it using the from_pretrained() class method.

property vocab_size

Size of the base vocabulary (without the added tokens).

padded_vocab_size(multiple=1) → int[source]

Pad the vocabulary with dummy tokens and return the new size.
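
A usage sketch. The rounding behavior assumed in the comments (size rounded up to a multiple of the given value) is an assumption, not a documented guarantee:

padded = tokenizer.padded_vocab_size(multiple=128)
# Assumption: the result is the smallest multiple of 128 that is >= tokenizer.vocab_size,
# which is convenient when sharding the embedding table across devices.
assert padded % 128 == 0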

get_vocab() → Dict[str, int][source]

Returns the vocabulary as a dictionary of token to index. tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns

The vocabulary.

Return type

Dict[str, int]

get_added_vocab() → Dict[str, int][source]

Returns the added tokens in the vocabulary as a dictionary of token to index.

Returns

The added tokens.

Return type

Dict[str, int]

add_tokens(new_tokens: Union[str, List[str]], special_tokens: bool = False) → int[source]

Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from the length of the current vocabulary.

Note

When adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer. In order to do that, please use the resize_token_embeddings() method.

Parameters
  • new_tokens (str, or a list of str) – Tokens are only added if they are not already in the vocabulary.

  • special_tokens (bool, optional, defaults to False) – Can be used to specify if the token is a special token. This mostly changes the normalization behavior (special tokens like [CLS] or [MASK] are usually not lower-cased, for instance).

Returns

Number of tokens added to the vocabulary.

Return type

int

Examples:

# Let's see how to increase the vocabulary of Bert model and tokenizer
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
print('We have added', num_added_toks, 'tokens')
# Notice: resize_token_embeddings expects to receive the full size of the new
# vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))

sanitize_special_tokens() → int[source]

Make sure that all the special tokens attributes of the tokenizer (tokenizer.mask_token, tokenizer.cls_token, etc.) are in the vocabulary.

Add the missing ones to the vocabulary if needed.

Returns

The number of tokens added to the vocabulary during the operation.

Return type

int

add_special_tokens(special_tokens_dict: Dict[str, str]) → int[source]

Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).

Note

When adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer. In order to do that, please use the resize_token_embeddings() method.

Using add_special_tokens will ensure your special tokens can be used in several ways:

  • Special tokens are carefully handled by the tokenizer (they are never split).

  • You can easily refer to special tokens using tokenizer class attributes like tokenizer.cls_token. This makes it easy to develop model-agnostic training and fine-tuning scripts.

When possible, special tokens are already registered for provided pretrained models (for instance, BertTokenizer's cls_token is already registered as '[CLS]', and XLM's is registered as '</s>').

Parameters

special_tokens_dict (dictionary str to str) – Keys should be in the list of predefined special attributes: [bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens]. Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns them the index of the unk_token).

Returns

Number of tokens added to the vocabulary.

Return type

int

Examples:

# Let's see how to add a new classification token to GPT-2
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
special_tokens_dict = {'cls_token': '<CLS>'}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print('We have added', num_added_toks, 'tokens')
# Notice: resize_token_embeddings expects to receive the full size of the new vocabulary,
# i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
assert tokenizer.cls_token == '<CLS>'

tokenize(text: str, **kwargs) → List[str][source]

Converts a string into a sequence of tokens, using the tokenizer. Splits into words for word-based vocabularies or into sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece). Takes care of added tokens.

Parameters
  • text (str) – The sequence to be encoded.

  • **kwargs (additional keyword arguments) – Passed along to the model-specific prepare_for_tokenization preprocessing method.

Returns

The list of tokens.

Return type

List[str]

convert_tokens_to_ids(tokens: Union[str, List[str]]) → Union[int, List[int]][source]

Converts a token string (or a sequence of tokens) to a single integer id (or a sequence of ids), using the vocabulary.

convert_ids_to_tokens(ids: Union[int, List[int]], skip_special_tokens: bool = False) → Union[str, List[str]][source]

Converts a single index or a sequence of indices to a token or a sequence of tokens, using the vocabulary and added tokens.

Parameters
  • ids (int or List[int]) – The token id (or token ids) to convert to tokens.

  • skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.

Returns

The decoded token(s).

Return type

str or List[str]

convert_tokens_to_string(tokens: List[str]) → str[source]

Converts a sequence of tokens to a single string. The simplest way to do it is " ".join(tokens), but we often want to remove sub-word tokenization artifacts at the same time.

Parameters

tokens (List[str]) – The tokens to join into a string.

Returns

The joined tokens.

Return type

str
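
A roundtrip sketch using the methods documented in this class (assuming tokenizer is any constructed libai tokenizer):

tokens = tokenizer.tokenize("LiBai tokenizers share this interface.")
ids = tokenizer.convert_tokens_to_ids(tokens)
back = tokenizer.convert_ids_to_tokens(ids)
text = tokenizer.convert_tokens_to_string(back)  # sub-word artifacts removed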

decode(token_ids, skip_special_tokens=False, clean_up_tokenization_spaces=True, spaces_between_special_tokens: bool = True)[source]

Converts a sequence of ids (integers) to a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces. Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

Parameters
  • token_ids – List of tokenized input ids. Can be obtained using the encode or encode_plus methods.

  • skip_special_tokens – if set to True, will remove special tokens in the decoding.

  • clean_up_tokenization_spaces – if set to True, will clean up the tokenization spaces.
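
A usage sketch (assuming ids is a list of token ids produced by the same tokenizer):

text = tokenizer.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
# Roughly equivalent to tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(ids)),
# with special tokens dropped and tokenization spaces cleaned up.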

property start_token

Start token of sentence. Common name for bos_token and cls_token.

Type

str

property end_token

End token of sentence. Common name for eos_token and sep_token. Note: eod_token is not considered, because it is often the same as eos_token.

Type

str

property bos_token

Beginning of sentence token. Log an error if used while not having been set.

Type

str

property eos_token

End of sentence token. Log an error if used while not having been set.

Type

str

property unk_token

Unknown token. Log an error if used while not having been set.

Type

str

property sep_token

Separation token, to separate context and query in an input sequence. Log an error if used while not having been set.

Type

str

property pad_token

Padding token. Log an error if used while not having been set.

Type

str

property cls_token

Classification token, to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Log an error if used while not having been set.

Type

str

property mask_token

Mask token, to use when training a model with masked-language modeling. Log an error if used while not having been set.

Type

str

property eod_token

End of document token. Log an error if used while not having been set.

Type

str

property additional_special_tokens

All the additional special tokens you may want to use. Log an error if used while not having been set.

Type

List[str]

property bos_token_id

Id of the beginning of sentence token in the vocabulary. Returns None if the token has not been set.

Type

Optional[int]

property eos_token_id

Id of the end of sentence token in the vocabulary. Returns None if the token has not been set.

Type

Optional[int]

property unk_token_id

Id of the unknown token in the vocabulary. Returns None if the token has not been set.

Type

Optional[int]

property sep_token_id

Id of the separation token in the vocabulary, to separate context and query in an input sequence. Returns None if the token has not been set.

Type

Optional[int]

property pad_token_id

Id of the padding token in the vocabulary. Returns None if the token has not been set.

Type

Optional[int]

property cls_token_id

Id of the classification token in the vocabulary, to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Returns None if the token has not been set.

Type

Optional[int]

property mask_token_id

Id of the mask token in the vocabulary, used when training a model with masked-language modeling. Returns None if the token has not been set.

Type

Optional[int]

property eod_token_id

Id of the end of document token in the vocabulary. Returns None if the token has not been set.

Type

Optional[int]

property start_token_id

Id of the start token in the vocabulary. Returns None if the token has not been set.

Type

Optional[int]

property end_token_id

Id of the end token in the vocabulary. Returns None if the token has not been set.

Type

Optional[int]

property additional_special_tokens_ids

Ids of all the additional special tokens in the vocabulary. Log an error if used while not having been set.

Type

List[int]

property special_tokens_map

A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values ('<unk>', '<cls>', etc.).

property all_special_tokens

All the special tokens ('<unk>', '<cls>', etc.) mapped to class attributes.

Type

List[str]

property all_special_ids

Ids of all the special tokens ('<unk>', '<cls>', etc.) mapped to class attributes.

Type

List[int]
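
A sketch showing how the special-token properties above can be inspected (assuming tokenizer is any constructed libai tokenizer; the printed values depend on the tokenizer):

print(tokenizer.special_tokens_map)   # e.g. {'unk_token': '[UNK]', 'cls_token': '[CLS]', ...} for BERT defaults
print(tokenizer.all_special_tokens)   # the special token strings
print(tokenizer.all_special_ids)      # their ids in the vocabulary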

static clean_up_tokenization(out_string)[source]

Clean up a list of simple English tokenization artifacts like spaces before punctuation and abbreviated forms.
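
A sketch of the expected behaviour (the exact set of cleanups is not specified here, so the commented output is illustrative, not guaranteed):

cleaned = PreTrainedTokenizer.clean_up_tokenization("Hello , world ! It ' s fine .")
# Expected to drop spaces before punctuation and around abbreviated forms,
# e.g. roughly "Hello, world! It's fine."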