All content for semantic segmentation

Development User documentation

Creating a pro workspace

To create a new workspace, click the "Create new pro workspace" button at the top of the projects view. This will take you to a modal where you can first …

Development User documentation

Credits and pricing FAQ

Hasty uses credit-based pricing. This can be confusing to some users, leading to the question... What is a credit? Credits are our own in-tool …

Development Deploy MP Wiki

CyclicLR

The learning rate is the most important hyperparameter in training deep neural networks. CyclicLR eliminates the need to tune the learning rate. The …
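
A minimal sketch of the idea (not Hasty's internal implementation) using PyTorch's torch.optim.lr_scheduler.CyclicLR; the toy model and all bound/step values below are illustrative assumptions:

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import CyclicLR

# Toy model and optimizer; base_lr, max_lr, and step_size_up are
# illustrative assumptions, not recommended settings.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# The learning rate cycles between base_lr and max_lr instead of staying
# fixed, so no single "correct" value has to be found up front.
scheduler = CyclicLR(optimizer, base_lr=0.001, max_lr=0.1,
                     step_size_up=200, mode="triangular")

for step in range(1000):   # stand-in for real training iterations
    optimizer.step()       # a real loop would run forward/backward first
    scheduler.step()       # advance the cyclic schedule every iteration
```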

Development Deploy MP Wiki

Dampening (SGD)

Do you also have that one very reasonable friend who always slows you down when you have a crazy idea, like opening a bar? Dampening is …
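
As a hedged illustration, PyTorch's SGD exposes dampening directly; everything in this sketch (model, data, hyperparameter values) is an assumed example:

```python
import torch
from torch import nn

# With momentum, PyTorch's SGD accumulates a velocity buffer roughly as:
#   buf = momentum * buf + (1 - dampening) * grad
# so dampening scales down how much each new gradient feeds the momentum,
# restraining the running velocity. Values below are illustrative.
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, dampening=0.5)

x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
```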

Development User documentation

Dashboard widgets and visualizations

Some common dashboard widgets are available irrespective of the type of problem: GPU Consumption, Best performing model, Runtime Inference …

Development User documentation

Data Split

The page the user sees when they enter Model Playground. This page essentially gives an overview of the various splits already tried when a user enters …

Development Deploy MP Wiki

DeepLabv3+

DeepLabv3+ is a semantic segmentation architecture that builds on DeepLabv3 by adding a simple yet effective decoder module to enhance segmentation …
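
For a quick hands-on sketch, the third-party segmentation_models_pytorch package provides a DeepLabV3Plus class; this is not necessarily the implementation Hasty uses, and the encoder choice and class count below are illustrative assumptions:

```python
import torch
import segmentation_models_pytorch as smp

# Encoder (backbone) and number of classes are assumed example values;
# set encoder_weights="imagenet" to download pretrained encoder weights.
model = smp.DeepLabV3Plus(
    encoder_name="resnet34",
    encoder_weights=None,
    in_channels=3,
    classes=2,
)
model.eval()

with torch.no_grad():
    images = torch.randn(1, 3, 256, 256)
    logits = model(images)   # per-pixel class logits: (1, 2, 256, 256)
```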

Deploy User documentation

Deploy

Deploy dashboard. The Deploy option on the Deploy & export page allows you to choose a model from the available experiments as the …

Development Deploy MP Wiki

EfficientNet

EfficientNets are a family of neural networks whose baseline model was constructed with Neural Architecture Search. Neural Architecture Search is …
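
As an illustrative sketch, recent torchvision releases ship the EfficientNet family; B0 is the NAS-found baseline, while larger variants (B1-B7) scale depth, width, and resolution. The input size below is an assumed example:

```python
import torch
from torchvision.models import efficientnet_b0

# Assumes torchvision >= 0.13; pass weights="IMAGENET1K_V1" to load
# pretrained weights instead of random initialization.
model = efficientnet_b0(weights=None)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))   # (1, 1000) class logits
```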

Development Deploy MP Wiki

Epoch

The number of epochs is a hyperparameter that defines the number of times that the learning algorithm will work through the entire training dataset. …
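
A minimal sketch of how the epoch count shapes a training loop; the toy model, data, and num_epochs value are all illustrative assumptions:

```python
import torch
from torch import nn

# One epoch = one full pass over the training dataset.
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
data = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(5)]
num_epochs = 10   # the hyperparameter in question (assumed value)

for epoch in range(num_epochs):
    for x, y in data:   # iterating all batches once = one epoch
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()
```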

Development Deploy MP Wiki

Epsilon Coefficient

We will use Adam optimization to briefly explain the epsilon coefficient. For the Adam optimizer, we know that the first and second moments …
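
A hedged sketch of a single Adam-style parameter update, showing where epsilon enters the denominator; all tensor values are made-up examples:

```python
import torch

# m_hat and v_hat stand for the bias-corrected first and second moment
# estimates of the gradient; lr and eps are assumed example values.
lr, eps = 1e-3, 1e-8
m_hat = torch.tensor([0.02])
v_hat = torch.tensor([4e-4])
param = torch.tensor([1.0])

# epsilon keeps the denominator away from zero when v_hat is tiny,
# preventing the step size from blowing up.
param = param - lr * m_hat / (v_hat.sqrt() + eps)
```

In torch.optim.Adam, the same constant is exposed as the eps argument.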

Development Deploy MP Wiki

ExponentialLR

This scheduling technique reduces the learning rate every epoch (or every eval period in the case of the iteration trainer) by a factor "gamma". At the last …
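
A minimal sketch with PyTorch's torch.optim.lr_scheduler.ExponentialLR; the gamma value and starting learning rate are illustrative assumptions:

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import ExponentialLR

# After every scheduler.step(), the learning rate is multiplied by gamma.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = ExponentialLR(optimizer, gamma=0.9)

for epoch in range(5):
    optimizer.step()                       # real training steps would go here
    scheduler.step()                       # lr -> lr * gamma
    print(epoch, scheduler.get_last_lr())  # [0.09], [0.081], ...
```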
