hezar.models.mask_filling.roberta.roberta_mask_filling module

A RoBERTa Language Model (HuggingFace Transformers) wrapped by a Hezar Model class

class hezar.models.mask_filling.roberta.roberta_mask_filling.RobertaMaskFilling(config, **kwargs)[source]

Bases: Model
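
A minimal usage sketch (the Hub path "hezarai/roberta-fa-mask-filling" is an assumption; substitute any compatible mask-filling model, and note that the import path may vary across Hezar versions):

    from hezar.models import Model

    # Load a mask-filling model from the Hub (path is hypothetical)
    model = Model.load("hezarai/roberta-fa-mask-filling")

    # predict() chains preprocess() -> forward() -> post_process()
    outputs = model.predict("Hezar is a <mask> library!")
    print(outputs)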

compute_loss(logits: Tensor, labels: Tensor) → Tensor[source]

Compute loss on the model outputs against the given labels

Parameters:
  • logits – Output logits from the model forward

  • labels – Target labels to compute the loss against

Returns:

Loss tensor
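
Since loss_func_name is 'cross_entropy' (see below), this is presumably a standard token-level cross-entropy over the vocabulary; a sketch of the equivalent computation, not necessarily the exact implementation:

    import torch.nn.functional as F

    # Flatten logits to (batch * seq_len, vocab_size) and labels to (batch * seq_len,),
    # then compute cross-entropy against the target token IDs
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))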

forward(token_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, output_attentions=None, output_hidden_states=None, **kwargs)[source]

Forward inputs through the model and return logits, etc.

Parameters:
  • token_ids – Input token IDs tensor

  • attention_mask – Attention mask tensor (optional)

The remaining optional parameters (token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, **kwargs) are passed through to the underlying HuggingFace RoBERTa model.

Returns:

A dict of outputs like logits, loss, etc.
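
forward() expects tokenized tensors rather than raw text. A hedged sketch of calling it manually, assuming preprocess() returns a dict whose keys match forward()'s parameters (as documented below) and that model is a loaded RobertaMaskFilling instance:

    model_inputs = model.preprocess("Hezar is a <mask> library!")
    model_outputs = model.forward(**model_inputs)  # dict containing logits, etc.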

loss_func_name: str | LossType = 'cross_entropy'
post_process(model_outputs: dict, top_k=1)[source]

Process model outputs and return human-readable results. Called in self.predict()

Parameters:
  • model_outputs – Model outputs to process

  • top_k – Number of top candidate fill tokens to return for each mask

Returns:

Model outputs processed and converted to human-readable results
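
top_k controls how many candidate fill tokens are returned per masked position. It is normally supplied through predict(), which routes extra kwargs to post_process() (a usage sketch assuming Hezar's standard kwargs dispatching):

    # Return the top 3 candidates for each masked position
    outputs = model.predict("Hezar is a <mask> library!", top_k=3)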

preprocess(inputs: str | List[str], **kwargs)[source]

Given raw text inputs, preprocess them and prepare them for the model’s forward().

Parameters:
  • inputs – Raw input text or a list of texts

  • **kwargs – Extra kwargs specific to the model. See the model’s specific class for more info

Returns:

A dict of inputs for model forward
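
A short sketch of inspecting the preprocessed inputs; the exact keys are an assumption based on forward()'s signature:

    model_inputs = model.preprocess("Hezar is a <mask> library!")
    print(model_inputs.keys())  # e.g. token_ids, attention_mask, ...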

required_backends: List[Backends | str] = [Backends.TRANSFORMERS, Backends.TOKENIZERS]
skip_keys_on_load = ['model.embeddings.position_ids', 'roberta.embeddings.position_ids']
tokenizer_name = 'bpe_tokenizer'