EfficientNets are a family of neural networks whose baseline model was constructed with Neural Architecture Search (NAS).

NAS is a technique for automating the design of artificial neural networks. The type of network that can be designed depends on the search space.

With the help of a neural architecture search that optimizes both accuracy and FLOPs (floating-point operations), we first create the baseline model, called EfficientNet-B0.
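To make the search objective concrete: the EfficientNet paper scores each candidate architecture m with a reward of the form ACC(m) × (FLOPS(m)/T)^w, where T is a target FLOPs budget and w is a negative exponent (the paper uses w = -0.07). The sketch below only illustrates the shape of that objective; the target budget and the candidate numbers are hypothetical values chosen for the example.

```python
# Hedged sketch of the NAS reward from the EfficientNet paper:
# ACC(m) * (FLOPS(m) / T) ** w. Only the formula comes from the paper;
# TARGET_FLOPS and the candidate numbers below are made-up examples.
TARGET_FLOPS = 400e6  # T: hypothetical target FLOPs budget
W = -0.07             # exponent trading accuracy against compute

def nas_reward(accuracy: float, flops: float) -> float:
    """Score a candidate: higher accuracy helps, exceeding the budget hurts."""
    return accuracy * (flops / TARGET_FLOPS) ** W

# A slightly less accurate but much cheaper candidate can win:
print(nas_reward(accuracy=0.76, flops=390e6))  # ~0.761
print(nas_reward(accuracy=0.77, flops=800e6))  # ~0.734
```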

Starting with this baseline model, we apply compound scaling to create the family of EfficientNet models from EfficientNet-B1 to B7.

Compound scaling was the central idea of the paper in which EfficientNet was introduced: it scales the network's depth, width, and input resolution together to increase accuracy.
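Concretely, the paper fixes constants α = 1.2, β = 1.1, γ = 1.15 (found by a small grid search, under the constraint α·β²·γ² ≈ 2) and scales depth by α^φ, width by β^φ, and resolution by γ^φ, where φ is a user-chosen compound coefficient. The minimal sketch below applies that rule; note that the published B1-B7 configurations round these values in practice, and the helper function here is ours, not a library API.

```python
# Minimal sketch of compound scaling: depth, width, and resolution
# grow together as alpha**phi, beta**phi, gamma**phi. With
# alpha * beta**2 * gamma**2 ~= 2, each step of phi roughly doubles FLOPs.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # grid-search constants from the paper

def compound_scale(phi: int):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in range(8):  # phi = 0 corresponds to the B0 baseline
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```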

These networks achieve higher accuracy with fewer parameters than other state-of-the-art neural networks.

Figure: comparison of EfficientNet with other neural networks on ImageNet.

We can see that the number of parameters in the EfficientNet family is significantly lower than in other models.
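You can verify this yourself by counting parameters; the sketch below compares EfficientNet-B0 against torchvision's ResNet-50, assuming the efficientnet_pytorch package that the training example further down also uses.

```python
# Quick parameter-count comparison (assumes efficientnet_pytorch and
# torchvision are installed; both are used elsewhere in this article).
from efficientnet_pytorch import EfficientNet
from torchvision import models

def count_params(model) -> int:
    return sum(p.numel() for p in model.parameters())

effnet_b0 = EfficientNet.from_name('efficientnet-b0')
resnet50 = models.resnet50()

print(f"EfficientNet-B0: {count_params(effnet_b0):,} parameters")  # ~5.3M
print(f"ResNet-50:       {count_params(resnet50):,} parameters")   # ~25.6M
```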

The Hasty tool lets you choose from these different EfficientNet variants.

Note that there is a trade-off between the number of parameters and accuracy going from EfficientNet-B1 to B7.

Pretrained weights are used for model initialization. Here, we use the weights of EfficientNet-B0 trained on the ImageNet dataset.
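With the efficientnet_pytorch package used in the training example below, loading those ImageNet weights is a one-liner; from_pretrained downloads the checkpoint on first use.

```python
# Load EfficientNet-B0 initialized with ImageNet-pretrained weights.
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b0')
```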

```python
import numpy as np
import pandas as pd
import os
import matplotlib.image as mpimg

import torch
import torch.nn as nn
import torch.optim as optim 

import torchvision
from torch.utils.data import DataLoader, Dataset
import torch.utils.data as utils
from torchvision import transforms

import matplotlib.pyplot as plt
%matplotlib inline

import warnings
warnings.filterwarnings("ignore")

data_dir = '../input'
train_dir = data_dir + '/train/train/'
test_dir = data_dir + '/test/test/'

# Training labels: image file name ('id') and binary target ('has_cactus')
labels = pd.read_csv("../input/train.csv")
labels.head()

# Dataset that reads an image by the file name in 'id' and returns
# (image_tensor, has_cactus_label)
class ImageData(Dataset):
    def __init__(self, df, data_dir, transform):
        super().__init__()
        self.df = df
        self.data_dir = data_dir
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, index):       
        img_name = self.df.id[index]
        label = self.df.has_cactus[index]

        img_path = os.path.join(self.data_dir, img_name)
        image = mpimg.imread(img_path)
        image = self.transform(image)
        return image, label

# mpimg returns a NumPy array; convert to PIL, then to a [0, 1] float tensor
data_transf = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor()])
train_data = ImageData(df = labels, data_dir = train_dir, transform = data_transf)
train_loader = DataLoader(dataset = train_data, batch_size = 64)

from efficientnet_pytorch import EfficientNet

# from_name builds the B1 architecture with randomly initialized weights;
# use EfficientNet.from_pretrained('efficientnet-b1') to start from
# ImageNet weights instead.
model = EfficientNet.from_name('efficientnet-b1')

# Unfreeze model weights
for param in model.parameters():
    param.requires_grad = True

# Replace the classification head with a single-logit binary output
num_ftrs = model._fc.in_features
model._fc = nn.Linear(num_ftrs, 1)

model = model.to('cuda')

optimizer = optim.Adam(model.parameters())
loss_func = nn.BCELoss()

# Train model
loss_log = []

for epoch in range(5):    
    model.train()    
    for ii, (data, target) in enumerate(train_loader):
        data, target = data.cuda(), target.cuda()
        target = target.float().unsqueeze(1)  # match the model's (batch, 1) output shape

        optimizer.zero_grad()
        output = model(data)                

        loss = loss_func(torch.sigmoid(output), target)
        loss.backward()

        optimizer.step()  

        if ii % 1000 == 0:
            loss_log.append(loss.item())

    print('Epoch: {} - Loss: {:.6f}'.format(epoch + 1, loss.item()))
```
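Since the loop records losses in loss_log and matplotlib is already imported, a quick plot is an easy convergence check; this short follow-up is an addition to the original example.

```python
# Plot the logged training loss to verify the model is converging.
plt.plot(loss_log)
plt.xlabel('logging step')
plt.ylabel('BCE loss')
plt.title('Training loss')
plt.show()
```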