hezar.models.sequence_labeling.roberta.roberta_sequence_labeling module

A RoBERTa Language Model (HuggingFace Transformers) wrapped by a Hezar Model class

class hezar.models.sequence_labeling.roberta.roberta_sequence_labeling.RobertaClassificationHead(config)[source]

Bases: Module

Head for sentence-level classification tasks.

forward(inputs, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the forward pass must be defined within this function, call the Module instance itself rather than forward() directly: the instance call runs the registered hooks, while calling forward() directly silently skips them.
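As a rough illustration of what such a head computes, here is a minimal pure-Python sketch of a typical classification head layout (dense projection → tanh → output projection, as in the standard 🤗Transformers RoBERTa head). The layer sizes and weights are made-up toy values, not the real model's, and the real head operates on tensors rather than Python lists.

```python
import math

def linear(x, weight, bias):
    """y = W @ x + b for a single feature vector (toy, list-based)."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
            for row, b in zip(weight, bias)]

def classification_head(features, dense_w, dense_b, out_w, out_b):
    # Dense projection followed by tanh, then a projection to label logits.
    hidden = [math.tanh(h) for h in linear(features, dense_w, dense_b)]
    return linear(hidden, out_w, out_b)  # raw logits, one per label

# Toy dimensions: hidden size 2, two labels.
logits = classification_head(
    [0.5, -1.0],
    dense_w=[[1.0, 0.0], [0.0, 1.0]], dense_b=[0.0, 0.0],
    out_w=[[1.0, -1.0], [0.5, 0.5]], out_b=[0.0, 0.0],
)
```

The returned values are unnormalized logits; turning them into probabilities or labels is left to the loss function and post-processing steps.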

class hezar.models.sequence_labeling.roberta.roberta_sequence_labeling.RobertaSequenceLabeling(config, **kwargs)[source]

Bases: Model

A standard 🤗Transformers RoBERTa model for sequence labeling

Parameters:

config – The whole model config including arguments needed for the inner 🤗Transformers model.

compute_loss(logits: Tensor, labels: Tensor) → Tensor[source]

Compute the loss of the model outputs against the given labels.

Parameters:
  • logits – Logits tensor produced by the model's forward pass

  • labels – Ground-truth label tensor

Returns:

Loss tensor
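Conceptually, token-classification loss is a per-token cross-entropy that skips positions labeled with the conventional ignore index -100 (padding and special tokens). The pure-Python sketch below illustrates that computation on toy values; the actual implementation uses torch.nn.CrossEntropyLoss on tensors.

```python
import math

def cross_entropy(logits, labels, ignore_index=-100):
    """Mean negative log-likelihood over non-ignored token positions."""
    total, count = 0.0, 0
    for token_logits, label in zip(logits, labels):
        if label == ignore_index:
            continue  # padding / special tokens contribute no loss
        # -log softmax(logits)[label] == logsumexp(logits) - logits[label]
        log_norm = math.log(sum(math.exp(l) for l in token_logits))
        total += log_norm - token_logits[label]
        count += 1
    return total / count

# Two real tokens and one ignored padding position.
loss = cross_entropy(
    logits=[[2.0, 0.5], [0.1, 1.5], [0.0, 0.0]],
    labels=[0, 1, -100],
)
```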

forward(token_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, **kwargs)[source]

Forward inputs through the model and return logits, etc.

Parameters:
  • token_ids – Batch of input token id tensors

  • attention_mask – Attention mask tensor marking real tokens vs. padding

  • **kwargs – The remaining arguments (token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) are passed through to the inner 🤗Transformers RoBERTa model

Returns:

A dict of outputs like logits, loss, etc.

post_process(model_outputs: Dict[str, Tensor], return_offsets: bool = False, return_scores: bool = False)[source]

Process model outputs and return human-readable results. Called in self.predict()

Parameters:
  • model_outputs – Model outputs to process

  • return_offsets – Whether to include each token's character offsets in the results

  • return_scores – Whether to include each token's prediction score in the results

Returns:

Model outputs converted to human-readable label results
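The core of this post-processing step is taking the argmax label id for each token, mapping it through the model's id2label table, and optionally attaching a softmax score. The sketch below shows that logic in pure Python; the id2label mapping is a made-up example, not the real model config.

```python
import math

def post_process(token_logits, id2label, return_scores=False):
    """Map per-token logits to label names, optionally with softmax scores."""
    results = []
    for logits in token_logits:
        label_id = max(range(len(logits)), key=lambda i: logits[i])
        entry = {"label": id2label[label_id]}
        if return_scores:
            exps = [math.exp(l) for l in logits]
            entry["score"] = exps[label_id] / sum(exps)  # softmax at argmax
        results.append(entry)
    return results

# Toy tag set and logits for a two-token input.
id2label = {0: "O", 1: "B-PER", 2: "I-PER"}
out = post_process([[2.0, 0.1, 0.1], [0.1, 3.0, 0.2]], id2label,
                   return_scores=True)
# out[0]["label"] == "O", out[1]["label"] == "B-PER"
```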

preprocess(inputs: str | List[str], **kwargs)[source]

Given raw inputs, preprocess the inputs and prepare them for model’s forward().

Parameters:
  • inputs – A raw input text string or a list of such strings

  • **kwargs – Extra kwargs specific to the model. See the model’s specific class for more info

Returns:

A dict of inputs for model forward
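In outline, preprocessing maps raw text to token ids plus an attention mask, padding a batch to a uniform length. The real model delegates this to its BPE tokenizer; the whitespace tokenizer and tiny vocabulary below are toy stand-ins used only to show the shape of the output dict.

```python
def preprocess(texts, vocab, pad_id=0, unk_id=1):
    """Toy preprocessing: whitespace tokenize, map to ids, pad the batch."""
    if isinstance(texts, str):
        texts = [texts]  # accept a single string or a list of strings
    batch = [[vocab.get(tok, unk_id) for tok in t.split()] for t in texts]
    max_len = max(len(ids) for ids in batch)
    token_ids = [ids + [pad_id] * (max_len - len(ids)) for ids in batch]
    attention_mask = [[1] * len(ids) + [0] * (max_len - len(ids))
                      for ids in batch]
    return {"token_ids": token_ids, "attention_mask": attention_mask}

vocab = {"hello": 2, "world": 3}
inputs = preprocess(["hello world", "hello"], vocab)
# inputs["token_ids"] == [[2, 3], [2, 0]]
# inputs["attention_mask"] == [[1, 1], [1, 0]]
```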

required_backends: List[Backends | str] = [Backends.TRANSFORMERS, Backends.TOKENIZERS]
skip_keys_on_load = ['roberta.embeddings.position_ids', 'model.embeddings.position_ids']
tokenizer_name = 'bpe_tokenizer'