Dampening (SGD)

Do you have that one very reasonable friend who always slows you down when you come up with a crazy idea, like opening a bar? Dampening is that friend to momentum.

Dampening keeps the optimizer from taking overly large steps on the loss landscape, which can happen when momentum is used on its own. It does this by scaling down the contribution of each new gradient to the momentum buffer, so the larger the gradient, the more its influence on the step is reduced compared to plain momentum.
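To make this concrete, here is a minimal sketch of the momentum update with dampening, following the general rule buf = momentum * buf + (1 - dampening) * grad (the same form PyTorch's SGD uses after the buffer is initialized). The gradient values and hyperparameters below are made up purely for illustration.

```python
import torch

momentum, dampening, lr = 0.9, 0.5, 0.1

buf = torch.zeros(1)                 # momentum buffer (velocity)
for grad in (torch.tensor([4.0]),    # a large gradient...
             torch.tensor([0.5])):   # ...followed by a small one
    # each incoming gradient only contributes (1 - dampening) of its value
    buf = momentum * buf + (1 - dampening) * grad
    step = lr * buf
    print(f"grad={grad.item():.2f}  buffer={buf.item():.3f}  step={step.item():.4f}")
```

With dampening=0.5, the large gradient adds only half of its value to the buffer, so the resulting step is smaller than it would be with momentum alone (dampening=0).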

Dampening has not been studied much yet, its benefits are not well established, and it is usually set to 0 by default. Still, it may be interesting to experiment with.
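If you want to try it out, a quick sketch of enabling a non-zero dampening value with PyTorch's built-in SGD optimizer might look like this. The tiny model and random data are placeholders for illustration only.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,
    momentum=0.9,
    dampening=0.3,   # default is 0; values in (0, 1) soften each gradient's
                     # contribution to the momentum buffer
)

# one toy training step
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```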
