
## All content for project and product managers

AdaMax

Adam can be understood as updating weights inversely proportional to the scaled L2 norm (squared) of past gradients. AdaMax extends this to the …
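
The extension is easiest to see in the update rule. Below is a minimal NumPy sketch of the AdaMax step as described in Kingma & Ba (2015); it is an illustration rather than this platform's implementation, and the gradient is assumed to be computed elsewhere.

```python
import numpy as np

def adamax_step(w, grad, m, u, t, lr=0.002, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaMax update: Adam's L2-norm-based second moment is replaced
    by an infinity norm, i.e. an exponentially weighted running max."""
    m = beta1 * m + (1 - beta1) * grad       # first moment, as in Adam
    u = np.maximum(beta2 * u, np.abs(grad))  # infinity-norm term replaces Adam's v
    return w - (lr / (1 - beta1 ** t)) * m / (u + eps), m, u
```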

AdamW

AdamW is very similar to Adam. It differs only in how the weight decay is implemented. The way it's implemented in Adam came from the …
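
The practical difference is small but easy to show. The sketch below (an illustration, not this article's code) implements one Adam-style step where a flag decides whether the decay term is folded into the gradient, as in Adam with L2 regularization, or applied directly to the weights, as in AdamW.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, wd=1e-2, decoupled=False):
    """One update step; `decoupled` switches between Adam-style L2
    regularization and AdamW-style decoupled weight decay."""
    if not decoupled:
        grad = grad + wd * w                  # Adam: decay joins the gradient
    m = beta1 * m + (1 - beta1) * grad        # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    if decoupled:
        w = w - lr * wd * w                   # AdamW: decay applied to weights
    return w, m, v
```

Because the coupled decay term passes through the adaptive denominator, weights with a large gradient history are effectively decayed less; AdamW removes that interaction.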

Advanced options

In advanced options, we currently only have one option available to users. The "Automated tools generate..." toggle allows you to decide whether our …

AI assistants status overview

Many of the questions we get from our users concern the status and training of our AI assistant models. We’re the first to admit this has …

While the Adam optimizer, which made use of momentum as well as RMSprop, was efficient in adjusting learning rates and finding the optimal …

ASGD

Average Stochastic Gradient Descent, abbreviated as ASGD, averages the weights that are calculated in every iteration: w_{t+1} = w_t - \eta \nabla …
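
A minimal sketch of the idea, with a toy quadratic gradient standing in for a real model: plain SGD runs as usual, and the returned solution is the running average of every iterate rather than the last one.

```python
import numpy as np

def asgd(grad_fn, w0, lr=0.01, steps=1000):
    """SGD with iterate averaging: returns the mean of all visited weights."""
    w, w_avg = w0.copy(), w0.copy()
    for t in range(1, steps + 1):
        w = w - lr * grad_fn(w)      # ordinary SGD step
        w_avg += (w - w_avg) / t     # running mean of the iterates
    return w_avg

# Toy problem: minimize ||w||^2, whose gradient is 2w.
solution = asgd(lambda w: 2 * w, np.ones(3))
```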

Attributor

Attributors are used in combination with other models such as object detectors and semantic and instance segmentors. They add a layer of metadata to …
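
Purely as a hypothetical illustration of the concept (none of these field names come from the product), an attributor takes the output of another model and enriches it with attribute metadata:

```python
# Hypothetical detection produced by an object detector.
detection = {"class": "car", "box": [120, 80, 340, 260]}

# An attributor adds a layer of metadata on top of the detection.
detection["attributes"] = {"color": "red", "occluded": False}
```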

Automated labelling

Automated labelling is a way for you to batch-process images in your project. Essentially, you take a model used in your project (Object Detection …

Average Loss

Average loss is the average of the various losses that arise in a model. Average loss varies from model to model, since different types of loss arise …
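
For example, an object-detection model might report several loss components per training step; the average loss is then simply their mean. The component names below are illustrative, not taken from any particular model.

```python
# Illustrative loss components for one training step.
losses = {"classification": 0.42, "box_regression": 0.31, "objectness": 0.17}

# Average loss: the mean over all component losses.
average_loss = sum(losses.values()) / len(losses)
print(average_loss)  # 0.3
```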

Base Learning Rate

The learning rate defines how large the steps of your optimizer are on your loss landscape. The base learning rate defines at which learning rate …
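
In PyTorch terms, for example, the base learning rate is the lr passed to the optimizer, and a scheduler then adjusts it from that starting point over the course of training. The model below is a placeholder.

```python
import torch

model = torch.nn.Linear(10, 2)                            # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # base learning rate

# The scheduler starts from the base rate and decays it every 30 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
```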

Basics

On the first setup screen, you can set the name of a project and add a project description. When you are done editing any fields on this page, don't …

Batch Size

The batch size is the number of samples that propagate through the neural network before the model parameters are updated. Each batch of samples …
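
As a minimal PyTorch sketch with random placeholder data: the batch size is set on the data loader, and the model parameters are updated once per batch.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
loader = DataLoader(dataset, batch_size=32, shuffle=True)  # 32 samples per batch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

for inputs, targets in loader:                # one iteration = one batch
    optimizer.zero_grad()
    loss_fn(model(inputs), targets).backward()
    optimizer.step()                          # parameters update after each batch
```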