MultiStepLR

MultiStepLR is a scheduler that decays the learning rate by a user-defined multiplicative factor each time training reaches one of a set of user-defined milestones. A milestone is simply an epoch number.

Major Parameters

Milestones

A list of epoch indices at which the learning rate is scaled. The values must be increasing, and the list can contain any number of milestones (two in the example below).

Gamma

The multiplicative factor by which the learning rate is scaled at each milestone.
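
Putting the two parameters together, the schedule can be written in closed form: at epoch €€t€€, the learning rate is €€lr_t = lr_{base} \cdot \gamma^{n(t)}€€, where €€n(t)€€ is the number of milestones less than or equal to €€t€€. This follows directly from the definitions above and is exactly what the demonstration below calculates.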

Mathematical Demonstration

Let us demonstrate how MultiStepLR works with a simple calculation.

If the milestones are set to 30 and 80, the base learning rate is 0.05, and gamma is 0.1, then:

for €€0 \le epoch < 30€€, €€lr = 0.05€€

for €€30 \le epoch < 80€€, €€lr = 0.05 \cdot 0.1 = 0.005€€

for €€epoch \ge 80€€, €€lr = 0.05 \cdot 0.1^2 = 0.0005€€
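
This arithmetic can be checked directly in PyTorch by stepping a MultiStepLR instance attached to a throwaway optimizer and printing the learning rate at a few epochs. This is a minimal sketch; the single dummy parameter and the SGD optimizer are only there to give the scheduler something to drive.

import torch

# Dummy parameter and optimizer, used only so the scheduler has something to attach to
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.SGD(params, lr=0.05)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)

for epoch in range(100):
    if epoch in (0, 29, 30, 79, 80):
        # Expect 0.05 up to epoch 29, 0.005 from epoch 30, 0.0005 from epoch 80
        print(epoch, scheduler.get_last_lr())
    optimizer.step()
    scheduler.step()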

Code Implementation


import torch
import torch.nn as nn

# Toy model, data, and loss function so the example runs end to end
model = nn.Linear(2, 2)
dataset = [(torch.randn(4, 2), torch.randn(4, 2)) for _ in range(10)]
loss_fn = nn.MSELoss()
learning_rate = 0.05

optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate, weight_decay=0.01, amsgrad=False)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1, last_epoch=-1)

for epoch in range(100):  # run past both milestones so the decay takes effect
    for input, target in dataset:
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
    scheduler.step()  # decay the learning rate once per epoch
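
During training, the current learning rate can be inspected with scheduler.get_last_lr() or optimizer.param_groups[0]['lr']; with the settings above it drops from 0.05 to 0.005 at epoch 30 and to 0.0005 at epoch 80, matching the calculation in the previous section.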