Adadelta was proposed to solve the diminishing learning rate problem seen in Adagrad. Adagrad accumulates all past squared gradients in its update, so its effective learning rate shrinks toward zero over time, whereas Adadelta restricts the accumulation to a decaying "window" of recent gradients.

The mathematical formulation of Adadelta is similar to that of RMSprop.

Both Adadelta and RMSprop were developed independently to fix this weakness of Adagrad, and both are suitable for optimizing non-stationary and non-convex problems.
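To make the similarity concrete, both methods maintain a decaying average of squared gradients (this is the standard formulation from Zeiler's Adadelta paper and Hinton's RMSprop notes):

€€E[g^2]_t = \rho \, E[g^2]_{t-1} + (1 - \rho) \, g_t^2€€

RMSprop scales a global learning rate €€\eta€€ by the root of this average:

€€\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}} \, g_t€€

Adadelta goes one step further and replaces €€\eta€€ with a decaying average of past squared parameter updates, so the step size adapts without a hand-tuned learning rate:

€€\Delta\theta_t = -\frac{\sqrt{E[\Delta\theta^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}} \, g_t, \qquad E[\Delta\theta^2]_t = \rho \, E[\Delta\theta^2]_{t-1} + (1 - \rho) \, \Delta\theta_t^2€€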

Major Parameters


Rho plays the same role as the €€\beta€€ of RMSprop. It is the smoothing constant, and its value ranges from 0 to 1. A higher value of rho means that more of the previously calculated squared gradients are taken into account, making the accumulator curve relatively "smooth".
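The smoothing effect of rho can be seen with a few lines of plain Python. The sketch below (the helper name ema_sq and the sample gradient sequence are illustrative, not from any library) accumulates squared gradients the way Adadelta and RMSprop do, and compares a high rho against a low one:

```python
def ema_sq(grads, rho):
    # Exponential moving average of squared gradients,
    # as accumulated by Adadelta / RMSprop.
    avg, history = 0.0, []
    for g in grads:
        avg = rho * avg + (1 - rho) * g * g
        history.append(avg)
    return history

grads = [1.0, 3.0] * 10           # gradients oscillating in magnitude
smooth = ema_sq(grads, rho=0.9)   # high rho: accumulator changes slowly
rough = ema_sq(grads, rho=0.1)    # low rho: accumulator tracks each step

def spread(xs):
    # Oscillation amplitude over the last few steps.
    return max(xs[-6:]) - min(xs[-6:])

print('rho=0.9 spread:', spread(smooth))
print('rho=0.1 spread:', spread(rough))
```

With rho=0.9 the accumulator barely reacts to the alternating gradient magnitudes, while with rho=0.1 it swings between them at every step, which is exactly the "smoothness" the parameter controls.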

Code Implementation

# Importing the library
import torch
import torch.nn as nn

# Toy inputs and targets.
x = torch.randn(10, 3)
y = torch.randn(10, 2)

# Build a fully connected layer.
linear = nn.Linear(3, 2)

# Build MSE loss function and optimizer.
criterion = nn.MSELoss()

# Optimization method using Adadelta
optimizer = torch.optim.Adadelta(linear.parameters(), lr=1.0, rho=0.9, eps=1e-06, weight_decay=0)

# Forward pass.
pred = linear(x)

# Compute loss.
loss = criterion(pred, y)
print('loss:', loss.item())

# Backward pass and a single optimization step.
loss.backward()
optimizer.step()
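A single step barely moves the parameters, since Adadelta's accumulators start at zero. A minimal sketch of a full training loop (the seed, step count, and toy data shapes here are illustrative choices, not from the original) shows the loss actually decreasing:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # illustrative seed for reproducibility
x = torch.randn(10, 3)
y = torch.randn(10, 2)

linear = nn.Linear(3, 2)
criterion = nn.MSELoss()
optimizer = torch.optim.Adadelta(linear.parameters(), lr=1.0, rho=0.9, eps=1e-06)

initial_loss = criterion(linear(x), y).item()
for _ in range(100):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = criterion(linear(x), y) # forward pass
    loss.backward()                # backward pass
    optimizer.step()               # Adadelta update

print('initial loss:', initial_loss)
print('final loss:', loss.item())
```

Note that lr=1.0 is the PyTorch default for Adadelta: the method derives its own per-parameter step size, so lr acts only as a scaling factor on that step.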

