hezar.configs module

Configs are at the core of Hezar. All core modules such as Model, Preprocessor, Trainer, etc. take their parameters as a config container, which is an instance of Config or one of its derivatives. A Config is a Python dataclass with auxiliary methods for loading, saving, pushing to the Hub, etc.

Examples

>>> from hezar.configs import ModelConfig
>>> config = ModelConfig.load("hezarai/bert-base-fa")
>>> from hezar.models import BertMaskFillingConfig
>>> bert_config = BertMaskFillingConfig(vocab_size=50000, hidden_size=768)
>>> bert_config.save("saved/bert", filename="model_config.yaml")
>>> bert_config.push_to_hub("hezarai/bert-custom", filename="model_config.yaml")
class hezar.configs.Config[source]

Bases: object

Base class for all configs in Hezar.

All configs are simple dataclasses that have some customized functionalities to manage their attributes. There are also some Hezar-specific methods: load, save and push_to_hub.

config_type: str = 'base'
dict()[source]

Returns the config object as a dictionary (works on nested dataclasses too)

Returns:

The config object as a dictionary
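
Continuing the Examples above, an illustrative sketch (the available keys depend on the config class):

>>> bert_config_dict = bert_config.dict()
>>> # Nested config fields are converted to plain dictionaries as well
>>> "hidden_size" in bert_config_dict
True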

classmethod fields()[source]
classmethod from_dict(dict_config: Dict | DictConfig, **kwargs)[source]

Load config from a dict-like object. Nested configs are also recursively converted to their classes if possible.
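
For illustration, a minimal sketch of building a config from a plain dictionary (the values are placeholders):

>>> from hezar.models import BertMaskFillingConfig
>>> config_dict = {"vocab_size": 50000, "hidden_size": 768}
>>> bert_config = BertMaskFillingConfig.from_dict(config_dict)
>>> bert_config.hidden_size
768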

get(key, default=None)[source]
keys()[source]
classmethod load(hub_or_local_path: str | os.PathLike, filename: str | None = None, subfolder: str | None = None, repo_type: str = None, cache_dir: str = None, **kwargs) Config[source]

Load a config from the Hub, or from disk if it already exists locally (handled by HfApi)

Parameters:
  • hub_or_local_path – Local or Hub path for the config

  • filename – Configuration filename

  • subfolder – Optional subfolder path where the config is in

  • repo_type – Repo type, e.g., model, dataset, etc.

  • cache_dir – Path to cache directory

  • **kwargs – Manual config parameters to override

Returns:

A Config instance
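
An illustrative sketch of loading with keyword overrides (repo id reused from the Examples above; the overridden field is an assumed example):

>>> from hezar.configs import ModelConfig
>>> config = ModelConfig.load("hezarai/bert-base-fa", filename="model_config.yaml")
>>> # Manual parameters passed as kwargs override the loaded values
>>> config = ModelConfig.load("hezarai/bert-base-fa", hidden_size=1024)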

name: str = None
push_to_hub(repo_id: str, filename: str, subfolder: str | None = None, repo_type: str | None = 'model', skip_none_fields: bool | None = True, private: bool | None = False, commit_message: str | None = None)[source]

Push the config file to the hub

Parameters:
  • repo_id (str) – Repo name or id on the Hub

  • filename (str) – config file name

  • subfolder (str) – subfolder to save the config

  • repo_type (str) – Type of the repo, e.g., model, dataset, space

  • skip_none_fields (bool) – Whether to skip saving None values or not

  • private (bool) – Whether the repo should be private or not (ignored if the repo already exists)

  • commit_message (str) – Push commit message
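
A sketch using the optional arguments (repo id reused from the Examples above; the subfolder and commit message are placeholders):

>>> bert_config.push_to_hub(
...     "hezarai/bert-custom",
...     filename="model_config.yaml",
...     subfolder="configs",
...     private=True,
...     commit_message="Upload custom BERT config",
... )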

save(save_dir: str | os.PathLike, filename: str, subfolder: str | None = None, skip_none_fields: bool | None = True)[source]

Save the config as a YAML file (*_config.yaml) to a local path

Parameters:
  • save_dir – Save directory path

  • filename – Config file name

  • subfolder – Subfolder to save the config file

  • skip_none_fields (bool) – Whether to skip saving None values or not
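
A sketch using the optional arguments (the paths and subfolder are placeholders):

>>> bert_config.save("saved/bert", filename="model_config.yaml", subfolder="configs", skip_none_fields=False)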

update(d: dict, **kwargs)[source]

Update config with a given dictionary or keyword arguments. If a key does not exist in the attributes, prints a warning but sets it anyway.

Parameters:
  • d – A dictionary

  • **kwargs – Key/value pairs in the form of keyword arguments

Returns:

The config object itself (the update is also applied in-place)
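
Continuing the Examples above, a small sketch of updating fields after construction:

>>> bert_config = bert_config.update({"hidden_size": 1024}, vocab_size=42000)
>>> bert_config.hidden_size
1024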

class hezar.configs.DatasetConfig(task: TaskType | List[TaskType] = None, path: str = None)[source]

Bases: Config

Base dataclass for all dataset configs

config_type: str = 'dataset'
name: str = None
path: str = None
task: TaskType | List[TaskType] = None
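
A minimal sketch of the base dataset config fields (the task name and path are illustrative placeholders):

>>> from hezar.configs import DatasetConfig
>>> dataset_config = DatasetConfig(task="text_classification", path="hezarai/some-dataset")
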
class hezar.configs.EmbeddingConfig(bypass_version_check: bool = False)[source]

Bases: Config

Base dataclass for all embedding configs

bypass_version_check: bool = False
config_type: str = 'embedding'
name: str = None
class hezar.configs.MetricConfig(objective: Literal['maximize', 'minimize'] = None, output_keys: List | Tuple = None, n_decimals: int = 4)[source]

Bases: Config

Base dataclass for all metric configs

config_type: str = 'metric'
n_decimals: int = 4
name: str = None
objective: Literal['maximize', 'minimize'] = None
output_keys: List | Tuple = None
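
A minimal sketch constructing the base config directly (in practice each metric ships its own config subclass; the values are illustrative):

>>> from hezar.configs import MetricConfig
>>> f1_metric_config = MetricConfig(objective="maximize", output_keys=("f1",), n_decimals=3)
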
class hezar.configs.ModelConfig[source]

Bases: Config

Base dataclass for all model configs

config_type: str = 'model'
name: str = None
class hezar.configs.PreprocessorConfig[source]

Bases: Config

Base dataclass for all preprocessor configs

config_type: str = 'preprocessor'
name: str = None
class hezar.configs.TrainerConfig(output_dir: str, task: str | TaskType, device: str = 'cuda', num_epochs: int = None, init_weights_from: str = None, num_dataloader_workers: int = 0, dataloader_drop_last: bool = False, dataloader_shuffle: bool = True, seed: int = 42, optimizer: str | OptimizerType = None, learning_rate: float = 2e-05, weight_decay: float = 0.0, lr_scheduler: str | LRSchedulerType = None, lr_scheduler_kwargs: Dict[str, Any] = None, batch_size: int = None, eval_batch_size: int = None, gradient_accumulation_steps: int = 1, distributed: bool = False, mixed_precision: PrecisionType | str | None = None, use_cpu: bool = False, evaluate_with_generate: bool = True, metrics: List[str | MetricConfig] = None, metric_for_best_model: str = 'evaluation.loss', save_enabled: bool = True, save_freq: int = None, save_steps: int = None, checkpoints_dir: str = 'checkpoints', logs_dir: str = 'logs')[source]

Bases: Config

Base dataclass for all trainer configs

Parameters:
  • task (str, TaskType) – The training task. Must be a valid name from TaskType.

  • output_dir (str) – Path to the directory to save trainer properties.

  • device (str) – Hardware device, e.g., cuda:0, cpu, etc.

  • num_epochs (int) – Number of total epochs to train the model.

  • init_weights_from (str) – Path to a model from disk or Hub to load the initial weights from.

  • num_dataloader_workers (int) – Number of dataloader workers, defaults to 0.

  • seed (int) – Control determinism of the run by setting a seed value. Defaults to 42.

  • optimizer (OptimizerType) – Name of the optimizer, available values include properties in OptimizerType enum.

  • learning_rate (float) – Initial learning rate for the optimizer.

  • weight_decay (float) – Optimizer weight decay value.

  • lr_scheduler (LRSchedulerType) – Optional learning rate scheduler among LRSchedulerType enum.

  • lr_scheduler_kwargs (Dict[str, Any]) – LR scheduler constructor kwargs, depending on the scheduler type.

  • batch_size (int) – Training batch size.

  • eval_batch_size (int) – Evaluation batch size, defaults to batch_size if None.

  • gradient_accumulation_steps (int) – Number of updates steps to accumulate before performing a backward/update pass, defaults to 1.

  • distributed (bool) – Whether to use distributed training (via the accelerate package)

  • mixed_precision (PrecisionType | str) – Mixed precision type, e.g., fp16, bf16, etc. (disabled by default)

  • evaluate_with_generate (bool) – Whether to use generate() in the evaluation step or not. (only applicable for generative models).

  • metrics (List[str | MetricConfig]) – A list of metrics; valid values depend on the valid_metrics of the Trainer's specific MetricsHandler.

  • metric_for_best_model (str) – Reference metric key to watch for determining the best model. Recommended to have a {train. | evaluation.} prefix (e.g., evaluation.f1, train.accuracy, etc.); if the prefix is missing, it defaults to evaluation.{metric_for_best_model}.

  • save_freq (int) (DEPRECATED) – Deprecated and renamed to save_steps.

  • save_steps (int) – Save the trainer outputs every save_steps steps. Leave as 0 to ignore saving between training steps.

  • checkpoints_dir (str) – Path to the checkpoints’ folder. The actual files will be saved under {output_dir}/{checkpoints_dir}.

  • logs_dir (str) – Path to the logs’ folder. The actual log files will be saved under {output_dir}/{logs_dir}.

batch_size: int = None
checkpoints_dir: str = 'checkpoints'
config_type: str = 'trainer'
dataloader_drop_last: bool = False
dataloader_shuffle: bool = True
device: str = 'cuda'
distributed: bool = False
eval_batch_size: int = None
evaluate_with_generate: bool = True
gradient_accumulation_steps: int = 1
init_weights_from: str = None
learning_rate: float = 2e-05
logs_dir: str = 'logs'
lr_scheduler: str | LRSchedulerType = None
lr_scheduler_kwargs: Dict[str, Any] = None
metric_for_best_model: str = 'evaluation.loss'
metrics: List[str | MetricConfig] = None
mixed_precision: PrecisionType | str | None = None
name: str = 'trainer'
num_dataloader_workers: int = 0
num_epochs: int = None
optimizer: str | OptimizerType = None
output_dir: str
save_enabled: bool = True
save_freq: int = None
save_steps: int = None
seed: int = 42
task: str | TaskType
use_cpu: bool = False
weight_decay: float = 0.0
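
A minimal sketch of a trainer config for a classification run (the task name, paths and values are illustrative placeholders):

>>> from hezar.configs import TrainerConfig
>>> train_config = TrainerConfig(
...     task="text_classification",
...     output_dir="runs/bert-fa-classifier",
...     num_epochs=3,
...     batch_size=16,
...     learning_rate=2e-5,
...     metrics=["f1"],
...     metric_for_best_model="evaluation.f1",
... )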