phlower.settings.PhlowerTrainerSetting

class phlower.settings.PhlowerTrainerSetting(*, loss_setting=<factory>, optimizer_setting=<factory>, scheduler_settings=<factory>, handler_settings=<factory>, n_epoch=10, random_seed=0, batch_size=1, num_workers=0, device='cpu', evaluation_for_training=True, log_every_n_epoch=1, initializer_setting=<factory>, lazy_load=True, time_series_sliding=<factory>, parallel_setting=<factory>, non_blocking=False, pin_memory=False)[source]

Bases: BaseModel

Methods

get_device([rank])

get_early_stopping_patience()
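A minimal usage sketch, assuming phlower is installed; the field values below are illustrative only, and the expectation about `get_device()` follows the `device` description further down:

```python
# Hypothetical usage sketch for PhlowerTrainerSetting.
# Field names follow the signature above; values are illustrative.
from phlower.settings import PhlowerTrainerSetting

setting = PhlowerTrainerSetting(
    n_epoch=100,     # train for 100 epochs instead of the default 10
    batch_size=4,
    num_workers=2,   # worker processes for data loading
    device="auto",   # resolve to GPU when available, otherwise CPU
)

# get_device() is expected to return the resolved device; with
# device="auto" it should pick the GPU when one is available.
device = setting.get_device()
```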

Attributes

model_computed_fields

model_config

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_extra

Get extra fields set during validation.

model_fields

model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

loss_setting

Setting for the loss function.

optimizer_setting

Setting for the optimizer.

scheduler_settings

Settings for the schedulers.

handler_settings

Settings for the handlers.

n_epoch

The number of training epochs.

random_seed

Random seed.

batch_size

Batch size.

num_workers

The number of worker processes used for data loading.

device

Device name.

evaluation_for_training

If True, evaluation on the training dataset is performed.

log_every_n_epoch

Dump log items every nth epoch.

initializer_setting

Setting for the trainer initializer.

lazy_load

If True, data is loaded lazily.

time_series_sliding

Setting for the sliding window over time-series data.

parallel_setting

Setting for parallel processing.

non_blocking

If True, non_blocking transfer is used when data is transferred to the device.

pin_memory

If True, pin_memory is used in the DataLoader.

Parameters:
  • loss_setting (LossSetting)

  • optimizer_setting (OptimizerSetting)

  • scheduler_settings (list[SchedulerSetting])

  • handler_settings (list[Annotated[Annotated[EarlyStoppingSetting, Tag(tag=EarlyStopping)] | Annotated[UserDefinedHandlerSetting, Tag(tag=UserCustom)], Discriminator(discriminator=~phlower.settings._handler_settings._custom_handler_discriminator, custom_error_type=invalid_union_member, custom_error_message=Invalid union member, custom_error_context={'discriminator': 'handler_checkk'})]])

  • n_epoch (int)

  • random_seed (int)

  • batch_size (int)

  • num_workers (int)

  • device (str)

  • evaluation_for_training (bool)

  • log_every_n_epoch (int)

  • initializer_setting (TrainerInitializerSetting)

  • lazy_load (bool)

  • time_series_sliding (TimeSeriesSlidingSetting)

  • parallel_setting (ParallelSetting)

  • non_blocking (bool)

  • pin_memory (bool)

batch_size: int

Batch size. Defaults to 1.

device: str

Device name. Defaults to 'cpu'. When 'auto' is set, the device is automatically set to GPU when available, otherwise CPU.

evaluation_for_training: bool

If True, evaluation on the training dataset is performed.

handler_settings: list[HandlerSettingType]

Settings for the handlers.

initializer_setting: TrainerInitializerSetting

Setting for the trainer initializer.

lazy_load: bool

If True, data is loaded lazily. If False, all data is loaded at once. Defaults to True.

log_every_n_epoch: int

Dump log items every nth epoch.

loss_setting: LossSetting

Setting for the loss function.

model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
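The model_config above marks the model as frozen with extra fields forbidden. A minimal sketch of what that implies, using a stand-in pydantic model (DemoSetting is hypothetical, not part of phlower):

```python
# Stand-in model (not part of phlower) demonstrating the effect of
# model_config = {'extra': 'forbid', 'frozen': True}.
from pydantic import BaseModel, ConfigDict, ValidationError


class DemoSetting(BaseModel):
    model_config = ConfigDict(extra="forbid", frozen=True)
    n_epoch: int = 10


def rejects_extra_field() -> bool:
    """Unknown keyword arguments raise ValidationError ('extra': 'forbid')."""
    try:
        DemoSetting(n_epoch=5, typo_field=1)
        return False
    except ValidationError:
        return True


def rejects_mutation() -> bool:
    """Assigning to a field after construction raises ('frozen': True)."""
    setting = DemoSetting(n_epoch=5)
    try:
        setting.n_epoch = 3
        return False
    except ValidationError:
        return True
```

In practice this means a misspelled key in a settings file fails fast at validation time, and a constructed setting cannot be mutated afterwards.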

n_epoch: int

The number of training epochs. Defaults to 10.

non_blocking: bool

If True, non_blocking transfer is used when data is transferred to the device. Defaults to False.

num_workers: int

The number of worker processes used for data loading. Defaults to 0.

optimizer_setting: OptimizerSetting

Setting for the optimizer.

parallel_setting: ParallelSetting

Setting for parallel processing.

pin_memory: bool

If True, pin_memory is used in the DataLoader. Defaults to False.

random_seed: int

Random seed. Defaults to 0.

scheduler_settings: list[SchedulerSetting]

Settings for the schedulers.

time_series_sliding: TimeSeriesSlidingSetting

Setting for the sliding window over time-series data. Defaults to inactive.