libai.scheduler

libai.scheduler.WarmupCosineAnnealingLR(optimizer: oneflow.optim.optimizer.Optimizer, max_iter: int, warmup_factor: float, warmup_iter: int, eta_min: float = 0.0, warmup_method: str = 'linear')[source]

Create a schedule where the learning rate decreases from the initial lr set in the optimizer down to eta_min, following the cosine annealing function, after a warmup period during which it increases linearly from warmup_factor times the initial lr up to the initial lr. A usage sketch follows the parameter list below.

Parameters
  • optimizer (flow.optim.Optimizer) – Wrapped optimizer.

  • max_iter (int) – Total number of training iterations.

  • warmup_factor (float) – The multiplicative factor applied to the initial lr at the start of warmup.

  • warmup_iter (int) – The number of warmup steps.

  • eta_min (float, optional) – Minimum learning rate. Defaults to 0.0.

  • warmup_method (str, optional) – The warmup method; either “linear” or “constant”. In linear mode, the multiplication factor starts at warmup_factor in the first iteration and then increases linearly to reach 1. Defaults to “linear”.
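A minimal usage sketch (an illustration, not taken from the LiBai source): it assumes the returned scheduler follows the standard oneflow.optim.lr_scheduler interface, i.e. scheduler.step() is called once per training iteration; the hyperparameter values below are arbitrary.

    import oneflow as flow
    from libai.scheduler import WarmupCosineAnnealingLR

    model = flow.nn.Linear(8, 2)  # toy model
    optimizer = flow.optim.SGD(model.parameters(), lr=0.1)

    scheduler = WarmupCosineAnnealingLR(
        optimizer,
        max_iter=1000,        # total training iterations
        warmup_factor=0.001,  # lr starts at 0.001 * initial lr
        warmup_iter=100,      # linear warmup over the first 100 iterations
        eta_min=1e-5,         # cosine annealing floor
    )

    for it in range(1000):
        # forward / backward / optimizer.step() would go here
        scheduler.step()      # advance the schedule once per iteration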

libai.scheduler.WarmupCosineLR(optimizer: oneflow.optim.optimizer.Optimizer, max_iter: int, warmup_factor: float, warmup_iter: int, alpha: float = 0.0, warmup_method: str = 'linear')[source]

Create a schedule where the learning rate decreases from the initial lr set in the optimizer, following the cosine function, down to alpha times the initial lr, after a warmup period during which it increases linearly from warmup_factor times the initial lr up to the initial lr. A usage sketch follows the parameter list below.

Parameters
  • optimizer (flow.optim.Optimizer) – Wrapped optimizer.

  • max_iter (int) – Total number of training iterations.

  • warmup_factor (float) – The multiplicative factor applied to the initial lr at the start of warmup.

  • warmup_iter (int) – The number of warmup steps.

  • alpha (float, optional) – The learning rate scale factor α; the decayed learning rate bottoms out at alpha times the initial lr. Defaults to 0.0.

  • warmup_method (str, optional) – The warmup method; either “linear” or “constant”. In linear mode, the multiplication factor starts at warmup_factor in the first iteration and then increases linearly to reach 1. Defaults to “linear”.
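A sketch of how alpha changes the floor of the decay (assuming, as described above, that the decayed lr bottoms out at alpha times the initial lr; the values are illustrative):

    import oneflow as flow
    from libai.scheduler import WarmupCosineLR

    model = flow.nn.Linear(8, 2)
    optimizer = flow.optim.SGD(model.parameters(), lr=0.1)

    # With alpha=0.1 the cosine decay should bottom out near
    # alpha * initial lr = 0.01 rather than 0.
    scheduler = WarmupCosineLR(
        optimizer, max_iter=1000, warmup_factor=0.001, warmup_iter=100, alpha=0.1
    )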

libai.scheduler.WarmupExponentialLR(optimizer: oneflow.optim.optimizer.Optimizer, max_iter: int, gamma: float, warmup_factor: float, warmup_iter: int, warmup_method: str = 'linear')[source]

Create a schedule where the learning rate decays exponentially toward 0 from the initial lr set in the optimizer, being multiplied by gamma at each iteration, after a warmup period during which it increases linearly from warmup_factor times the initial lr up to the initial lr. A usage sketch follows the parameter list below.

Parameters
  • optimizer (flow.optim.Optimizer) – Wrapped optimizer.

  • max_iter (int) – Total number of training iterations.

  • gamma (float) – Multiplicative factor of learning rate decay.

  • warmup_factor (float) – The multiplicative factor applied to the initial lr at the start of warmup.

  • warmup_iter (int) – The number of warmup steps.

  • warmup_method (str, optional) – The warmup method; either “linear” or “constant”. In linear mode, the multiplication factor starts at warmup_factor in the first iteration and then increases linearly to reach 1. Defaults to “linear”.
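A sketch that watches the lr rise through warmup and then decay, reading it back through optimizer.param_groups (assuming the optimizer exposes the current lr there, as in PyTorch-style optimizers; values are illustrative):

    import oneflow as flow
    from libai.scheduler import WarmupExponentialLR

    model = flow.nn.Linear(8, 2)
    optimizer = flow.optim.SGD(model.parameters(), lr=0.1)

    scheduler = WarmupExponentialLR(
        optimizer, max_iter=1000, gamma=0.99, warmup_factor=0.001, warmup_iter=100
    )

    for it in range(200):
        # forward / backward / optimizer.step() would go here
        scheduler.step()
        if it % 50 == 0:
            print(it, optimizer.param_groups[0]["lr"])  # warmup, then decay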

libai.scheduler.WarmupMultiStepLR(optimizer: oneflow.optim.optimizer.Optimizer, max_iter: int, warmup_factor: float, warmup_iter: int, milestones: list, gamma: float = 0.1, warmup_method: str = 'linear')[source]

Create a schedule where the learning rate starts at the initial lr set in the optimizer and is multiplied by gamma at each milestone, after a warmup period during which it increases linearly from warmup_factor times the initial lr up to the initial lr. A usage sketch follows the parameter list below.

Parameters
  • optimizer (flow.optim.Optimizer) – Wrapped optimizer.

  • max_iter (int) – Total number of training iterations.

  • warmup_factor (float) – The multiplicative factor applied to the initial lr at the start of warmup.

  • warmup_iter (int) – The number of warmup steps.

  • milestones (list) – List of iteration indices at which the learning rate is multiplied by gamma. Must be increasing.

  • gamma (float, optional) – Multiplicative factor of learning rate decay. Defaults to 0.1.

  • warmup_method (str, optional) – The warmup method; either “linear” or “constant”. In linear mode, the multiplication factor starts at warmup_factor in the first iteration and then increases linearly to reach 1. Defaults to “linear”.
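A sketch of a step schedule that warms up for 100 iterations and then drops the lr by 10x at iterations 600 and 800 (an illustration; values are arbitrary):

    import oneflow as flow
    from libai.scheduler import WarmupMultiStepLR

    model = flow.nn.Linear(8, 2)
    optimizer = flow.optim.SGD(model.parameters(), lr=0.1)

    scheduler = WarmupMultiStepLR(
        optimizer,
        max_iter=1000,
        warmup_factor=0.001,
        warmup_iter=100,
        milestones=[600, 800],  # lr *= gamma at each milestone
        gamma=0.1,
    )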

libai.scheduler.WarmupPolynomialLR(optimizer: oneflow.optim.optimizer.Optimizer, max_iter: int, warmup_factor: float, warmup_iter: int, end_learning_rate: float = 0.0001, power: float = 1.0, cycle: bool = False, warmup_method: str = 'linear')[source]

Create a schedule where the learning rate decreases as a polynomial decay from the initial lr set in the optimizer to the final lr defined by end_learning_rate, after a warmup period during which it increases linearly from warmup_factor times the initial lr up to the initial lr. A usage sketch follows the parameter list below.

Parameters
  • optimizer (flow.optim.Optimizer) – Wrapped optimizer.

  • max_iter (int) – Total number of training iterations.

  • warmup_factor (float) – The multiplicative factor applied to the initial lr at the start of warmup.

  • warmup_iter (int) – The number of warmup steps.

  • end_learning_rate (float, optional) – The final learning rate. Defaults to 0.0001.

  • power (float, optional) – The power of the polynomial. Defaults to 1.0.

  • cycle (bool, optional) – If True, the learning rate does not stay at end_learning_rate once max_iter is reached; instead the decay period is extended and the polynomial decay is applied again in cycles. Defaults to False.

  • warmup_method (str, optional) – The warmup method; either “linear” or “constant”. In linear mode, the multiplication factor starts at warmup_factor in the first iteration and then increases linearly to reach 1. Defaults to “linear”.
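A sketch of a polynomial schedule: after a 100-iteration warmup, the lr decays quadratically from 0.1 down to end_learning_rate over the remaining iterations (an illustration; values are arbitrary):

    import oneflow as flow
    from libai.scheduler import WarmupPolynomialLR

    model = flow.nn.Linear(8, 2)
    optimizer = flow.optim.SGD(model.parameters(), lr=0.1)

    scheduler = WarmupPolynomialLR(
        optimizer,
        max_iter=1000,
        warmup_factor=0.001,
        warmup_iter=100,
        end_learning_rate=1e-5,
        power=2.0,              # quadratic decay
    )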