hezar.metrics.precision module

class hezar.metrics.precision.Precision(config: PrecisionConfig, **kwargs)[source]

Bases: Metric

Precision metric for evaluating classification performance using sklearn’s precision_score.

Parameters:
  • config (PrecisionConfig) – Metric configuration object.

  • **kwargs – Extra configuration parameters passed as kwargs to update the config.

compute(predictions=None, targets=None, labels=None, pos_label=1, average=None, sample_weight=None, zero_division=None, n_decimals=None, output_keys=None)[source]

Computes the precision score for the given predictions against the targets.

Parameters:
  • predictions – Predicted labels.

  • targets – Ground truth labels.

  • labels – List of labels to include in the calculation.

  • pos_label (int) – Label of the positive class.

  • average (str) – Type of averaging for the precision score.

  • sample_weight (Iterable[float]) – Sample weights for the precision score.

  • zero_division (str | float) – Value to return when there is a zero division; defaults to 0.0.

  • n_decimals (int) – Number of decimals for the final score.

  • output_keys (tuple) – Keys to filter the metric results for output.

Returns:

A dictionary of the metric results, with keys specified by output_keys.

Return type:

dict
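
A minimal usage sketch, assuming only that Precision and PrecisionConfig are importable from this module and that scikit-learn is installed (see required_backends below); the returned dictionary is keyed by the config's output_keys, which defaults to ('precision',), and the printed value is illustrative:

  from hezar.metrics.precision import Precision, PrecisionConfig

  # Integer class labels; three classes, so the config's default average="macro" applies
  predictions = [0, 2, 1, 2, 0]
  targets = [0, 1, 1, 2, 0]

  metric = Precision(PrecisionConfig())
  results = metric.compute(predictions=predictions, targets=targets)
  print(results)  # e.g. {"precision": 0.8333}, rounded to the config's n_decimals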

required_backends: List[str | Backends] = [Backends.SCIKIT]

class hezar.metrics.precision.PrecisionConfig(objective: str = 'maximize', output_keys: tuple = ('precision',), n_decimals: int = 4, pos_label: int = 1, average: str = 'macro', sample_weight: Iterable[float] | None = None, zero_division: str | float = 0.0)[source]

Bases: MetricConfig

Configuration class for Precision metric.

Parameters:
  • name (MetricType) – The type of metric, Precision in this case.

  • pos_label (int) – Label of the positive class.

  • average (str) – Type of averaging for the precision score.

  • sample_weight (Iterable[float]) – Sample weights for the precision score.

  • zero_division (str | float) – Value to return when there is a zero division; defaults to 0.0.

  • output_keys (tuple) – Keys to filter the metric results for output.

average: str = 'macro'
name: str = 'precision'
objective: str = 'maximize'
output_keys: tuple = ('precision',)
pos_label: int = 1
sample_weight: Iterable[float] = None
zero_division: str | float = 0.0
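
A sketch of overriding the defaults, either by constructing a PrecisionConfig directly or, per the **kwargs note on Precision above, by passing config fields as keyword arguments; the printed value is illustrative:

  from hezar.metrics.precision import Precision, PrecisionConfig

  # Custom config: micro averaging and two decimals in the reported score
  config = PrecisionConfig(average="micro", n_decimals=2)
  metric = Precision(config)

  # Equivalent shortcut, assuming kwargs update the config as documented above:
  # metric = Precision(PrecisionConfig(), average="micro", n_decimals=2)

  results = metric.compute(
      predictions=[1, 0, 1, 1],
      targets=[1, 0, 0, 1],
  )
  print(results)  # e.g. {"precision": 0.75}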