
# FBNetV3OD

FBNetV3 is a family of state-of-the-art compact neural networks generated through Neural Architecture-Recipe Search (NARS). NARS is an advanced version of Neural Architecture Search (NAS) that searches jointly for the architecture and the training recipe. FBNetV3 networks have been shown to improve mAP (mean Average Precision).

FBNetV3OD is an object detection architecture generated by FBNetV3. In comparison to FBNetV3IS, FBNetV3OD doesn't have a mask head, since object detection only requires drawing bounding boxes around the objects of interest rather than predicting pixel-wise masks.

## Parameters

### Backbone Network

Many object detection and image segmentation models rely on feature extraction combined with region proposals, as this has proven to be more cost-effective than processing every candidate region from scratch. FBNetV3OD therefore starts with a similar backbone network that extracts these features.

### Pooler Resolution

The size to which region proposals are pooled before they are fed to the box predictor. In the Model playground, the default value is set to 6.
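To make this concrete, here is a simplified, illustrative sketch of pooling a proposal region to a fixed 6x6 grid (plain max-pooling over evenly split bins; real detectors typically use more refined variants such as RoIAlign, and this is not Hasty's actual implementation):

```python
import numpy as np

def roi_pool(feature_map, box, output_size=6):
    """Max-pool a rectangular region of a 2D feature map down to a
    fixed output_size x output_size grid (simplified RoI pooling)."""
    x1, y1, x2, y2 = box
    region = feature_map[y1:y2, x1:x2]
    # Split the region's rows and columns into output_size bins each
    h_bins = np.array_split(np.arange(region.shape[0]), output_size)
    w_bins = np.array_split(np.arange(region.shape[1]), output_size)
    out = np.empty((output_size, output_size))
    for i, hb in enumerate(h_bins):
        for j, wb in enumerate(w_bins):
            out[i, j] = region[np.ix_(hb, wb)].max()
    return out

fmap = np.random.rand(32, 32)                       # hypothetical feature map
pooled = roi_pool(fmap, box=(2, 4, 26, 28))         # 24x24 proposal region
print(pooled.shape)  # (6, 6)
```

Whatever the proposal's size, the box predictor always receives a fixed 6x6 input.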

### Weights

Before training, the weights of the neural network have to be initialized. Here, the weights are initialized from an FBNetV3a-DSMask-C4 checkpoint pre-trained on COCO.

### IOU Threshold

The IoU (Intersection over Union) threshold is used to decide whether a proposal contains background or an object.

Proposals whose IoU with a ground-truth box is above the upper bound are classified as objects, and proposals below the lower bound are classified as background. Proposals with IoU values between the two bounds are ignored during training.
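The labeling rule above can be sketched in a few lines (the lower and upper bounds of 0.3 and 0.7 are illustrative values, not Hasty's defaults):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

def label_proposal(proposal, gt_box, lower=0.3, upper=0.7):
    """Classify a proposal as object, background, or ignored
    using lower/upper IoU bounds."""
    score = iou(proposal, gt_box)
    if score >= upper:
        return "object"
    if score < lower:
        return "background"
    return "ignored"

print(label_proposal((0, 0, 10, 10), (0, 0, 10, 10)))    # object
print(label_proposal((50, 50, 60, 60), (0, 0, 10, 10)))  # background
```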

### Stages to Freeze

Freezing stages is useful when you have a relatively small amount of data that doesn't differ much from the data the initial weights were trained on. For example, the weights can be initialized with FBNetV3a-DSMask-C4 COCO while using FBNetV3 for object detection in Hasty. If your data is similar to COCO and your dataset is relatively small, it can be a good idea to freeze the first k layers and only train the remaining (n - k) layers. This prevents overfitting and also reduces training time.
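The effect of freezing the first k layers can be illustrated with a minimal gradient-descent step over a list of per-layer weights (a toy sketch, not how the training loop is actually implemented):

```python
import numpy as np

def sgd_step(weights, grads, frozen_k, lr=0.1):
    """One SGD update over n layers, skipping the first frozen_k layers."""
    updated = []
    for idx, (w, g) in enumerate(zip(weights, grads)):
        if idx < frozen_k:
            updated.append(w)           # frozen: weights stay unchanged
        else:
            updated.append(w - lr * g)  # trainable: gradient descent step
    return updated

weights = [np.ones(3) for _ in range(4)]     # 4 layers, toy weights
grads = [np.full(3, 0.5) for _ in range(4)]  # toy gradients
new_w = sgd_step(weights, grads, frozen_k=2)
print(new_w[0])  # [1. 1. 1.]        (frozen, unchanged)
print(new_w[3])  # [0.95 0.95 0.95]  (trained)
```

Since frozen layers need no gradient computation or update, each training step is also cheaper.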

### Normalization method

Normalization techniques help to decrease the overall training time of the model. They make the contributions of the features uniform by normalizing the intermediate activations. This also helps to prevent the activations, and consequently the gradients, from exploding, which makes optimization faster.

There are three normalization methods available in the Model playground:

• GN
• SyncBN
• naiveSyncBN

### SyncBN

In this normalization technique, the activations are scaled and shifted by the batch variance and mean. Mathematically, it is given as:

$$\hat{x}=\frac{x-E(x)}{\sqrt{Var(x)+\epsilon}},\qquad y=\gamma \cdot \hat{x}+\beta$$

The mean and standard deviation are calculated per dimension over all mini-batches of the same process group. Afterwards, the result is scaled and shifted with two further constants, $\gamma$ and $\beta$. These are parameters that are usually learned by the network.
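As a sanity check, the formula above can be reproduced in a few lines of NumPy (a simplified single-process sketch; SyncBN additionally aggregates the mean and variance across all workers in the process group):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch per feature dimension, then scale and shift:
    y = gamma * (x - E(x)) / sqrt(Var(x) + eps) + beta."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # batch of 3, 2 features
y = batch_norm(x)
print(y.mean(axis=0))  # ~[0, 0]  per-feature mean is zero
print(y.std(axis=0))   # ~[1, 1]  per-feature std is one
```

With gamma = 1 and beta = 0 the output has zero mean and unit variance per feature; the learned gamma and beta then restore whatever scale and shift helps the task.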

### NaiveSyncBN

In this normalization technique, all images contribute equally to the statistics regardless of how they are distributed across workers. This removes the need to compute an exactly weighted mean and variance for each batch, and only a small difference has been observed between this simplified calculation and the exact one.

### GN

Group Normalization, abbreviated as GN, is another normalization technique that normalizes groups of channels. For example, if the input has 50 channels, GN can split them into 5 groups of 10 channels each and normalize each group with its own mean and variance.
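The grouping can be sketched as a reshape followed by per-group normalization (a minimal NumPy version for a single sample, without the learnable scale and shift):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Normalize the channels of one sample in groups.
    x has shape (channels, height, width); channels must divide evenly."""
    c, h, w = x.shape
    grouped = x.reshape(num_groups, c // num_groups, h, w)
    mean = grouped.mean(axis=(1, 2, 3), keepdims=True)
    var = grouped.var(axis=(1, 2, 3), keepdims=True)
    normed = (grouped - mean) / np.sqrt(var + eps)
    return normed.reshape(c, h, w)

x = np.random.rand(50, 4, 4)       # 50 channels
y = group_norm(x, num_groups=5)    # 5 groups of 10 channels each
print(y.shape)  # (50, 4, 4)
```

Because the statistics are computed within a single sample, GN behaves the same regardless of batch size, which makes it attractive when batches are small.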

### Pre NMS number of proposals

The maximum number of proposals to be considered before non-maximum suppression. The proposals are sorted in descending order of confidence, and only the ones with the highest confidence are kept.

### Post NMS number of proposals

The maximum number of proposals to be kept after non-maximum suppression. A higher value increases the probability of detecting more objects, but also increases the computational cost, since more region proposals have to be processed.
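How the two limits interact with NMS can be sketched with a greedy implementation (illustrative thresholds and limits; real detectors run this on vectorized tensors):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter = (max(0, min(ax2, bx2) - max(ax1, bx1))
             * max(0, min(ay2, by2) - max(ay1, by1)))
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union

def nms(boxes, scores, iou_threshold=0.5,
        pre_nms_top_n=1000, post_nms_top_n=100):
    """Greedy NMS: keep the top pre_nms_top_n proposals by confidence,
    suppress heavy overlaps, return at most post_nms_top_n survivors."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    order = order[:pre_nms_top_n]          # "pre NMS" limit
    keep = []
    while order and len(keep) < post_nms_top_n:  # "post NMS" limit
        best = order.pop(0)
        keep.append(best)
        # Drop remaining proposals that overlap the kept one too much
        order = [i for i in order
                 if iou(boxes[i], boxes[best]) < iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] — box 1 is suppressed by box 0
```

Raising `post_nms_top_n` lets more surviving proposals through to the box predictor, which is exactly where the accuracy/computation trade-off described above comes from.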
