Hasty offers diverse functionality that can streamline your Machine Learning project workflow. In this guide, we dive deep into the Object Detection task and the steps you can take to squeeze every last drop of automation out of Hasty for such a case.
Object Detection (OD) is a Computer Vision (CV) task that aims to locate and classify every object of the specified classes in an image. This guide will demonstrate all the steps necessary to perform an OD task in Hasty successfully.
As you might know, many workplace accidents occur in various industrial sectors because people ignore safety precautions. Modern CV algorithms can detect and report such violations before it is too late. For example, in construction, OD algorithms might identify whether a person wears a hard hat in workplace settings that require it.
For this post, we built our own Hard Hat detection model with the help of Hasty's tools. We used the public Hard Hat Workers Object Detection Dataset, so you can quickly reproduce our results if needed.
Without further ado, let’s get started.
The general pipeline of working on an OD task in Hasty is the following:
Upload images or videos to the project;
Import existing annotations if you have some. If that is the case for you, please move directly to step 8;
Create classes for your future annotations;
Label the first 10 images and set them as Done or To review. You can then check out the first version of the OD AI Assistant, which will be trained right after that;
Continue annotating the images. The OD Assistant will be continuously improving;
If you are satisfied with the performance of the OD Assistant, you can use the Automated Labeling feature to label some or all of the remaining images automatically;
Once you have finished annotating the training set, use Hasty's AI Consensus Scoring feature to QA your labels (perform Quality Assurance);
Train a custom model for your task using Hasty's Model Playground, then check it out in Hasty or export it elsewhere.
Awesome, you have nailed it!
In Hasty, each task is done in a separate project, so you need to create one before diving into the work. You can do it in your workspace (in our case, it is the “Hasty demo” workspace) by clicking on the Create new project button.
Once you click the button, the pop-up window will appear. You will have to specify the following:
After filling in the fields, click on the Create button.
You will get to your project's Dashboard page (it will be called "Your_Project_Name Dashboard").
On your left, you will see the menu. Please navigate to the Images & Datasets page in the Content section.
Throughout the project, you will likely need several datasets for various purposes, such as training, validation, and test sets.
To create a new dataset, click the Create new button in the Datasets section. In our case, we created a train set and a test set.
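If you are preparing your images locally before uploading them to separate Hasty datasets, a deterministic split helps keep the train and test sets reproducible. A minimal sketch, assuming you already have a list of image filenames (the `split_dataset` helper and the 80/20 ratio are our own illustration, not part of Hasty):

```python
import random

def split_dataset(filenames, test_fraction=0.2, seed=42):
    """Deterministically shuffle filenames and split them into train/test lists."""
    rng = random.Random(seed)
    files = sorted(filenames)          # sort first so the shuffle is reproducible
    rng.shuffle(files)
    n_test = int(len(files) * test_fraction)
    return files[n_test:], files[:n_test]  # (train, test)

train, test = split_dataset([f"img_{i:03d}.jpg" for i in range(100)])
print(len(train), len(test))  # 80 20
```

Fixing the seed means you can re-run the split later and upload exactly the same files to each dataset.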
Now it is time to fill the datasets with data assets (in this case, images). Make sure you have the correct dataset selected. Use the Upload Files panel to upload your data.
At this point, you already have raw data uploaded to your Hasty project, so please navigate to the annotation environment via the Start annotating button in the upper right corner.
Before diving into the annotation, you should create classes for your future labels. You can do that in the annotation environment in the Label Classes section on your right.
In our case, the classes were “Head” and “Hard Hat”.
If you already have existing annotations you want to import, please check the official guide. In our case, despite Hard Hat Workers Object Detection Dataset having labels already, we decided to annotate the images ourselves.
Hasty supports many annotation tools, including both manual and AI-powered ones.
For the OD task, we used only the Bounding Box tool as it was the easiest way to label images. You can find it in the Manual Tools section on your left.
For example, we chose “Hard Hat” as the active class and drew two bounding boxes for this image.
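Under the hood, a bounding box is just four numbers, but conventions differ between tools: some store corner coordinates `[x_min, y_min, x_max, y_max]`, while COCO-style exports store `[x, y, width, height]`. A small sketch of converting between the two (the helper names are ours, for illustration):

```python
def xyxy_to_xywh(box):
    """Convert corner format [x_min, y_min, x_max, y_max] to COCO-style [x, y, w, h]."""
    x_min, y_min, x_max, y_max = box
    return [x_min, y_min, x_max - x_min, y_max - y_min]

def xywh_to_xyxy(box):
    """Convert COCO-style [x, y, w, h] back to corner format."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

print(xyxy_to_xywh([10, 20, 110, 220]))  # [10, 20, 100, 200]
```

Knowing which convention your export uses saves a lot of debugging when you move labels between tools.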
Once you label any image, please set it as To review or Done. First of all, it will massively help you to navigate through the project. Second, it will signal the AI assistant that this image is annotated and can be taken for the assistant’s training.
The AI assistants help you label the images automatically. They are not activated at the start of the project and need you to label some images for them to train.
When working on an OD task, your AI assistant is "Object Detection". You can find its logo on your left in the AI assistants section.
To trigger the assistant’s training, we annotated 10 images.
As soon as the assistant is ready, you will be notified. You can then test its potential straight away.
Select an image, click on the OD assistant’s icon, and start getting the annotation suggestions.
We tested the OD assistant with Confidence set to 30 since we were interested in whether the model would produce clear-cut mistakes and artifacts at relatively high confidence.
As you can see, the assistant did not perform perfectly. It made some mistakes and produced weird artifacts at first. However, these were the results of training the assistant on only 10 images from the entire dataset. An ML model cannot work adequately after seeing such a small part of the training data. That is why the assistants improve over time throughout the whole annotation process. It works as follows: you annotate more samples, and Hasty repeatedly retrains models on the labeled part of your project.
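A confidence threshold like the 30 we used above simply filters out low-scoring detections before they are shown. A minimal sketch of that idea, assuming predictions come as a list of dicts with a `score` field (the data layout here is our own illustration, not Hasty's internal format; model scores are usually 0-1, so a UI value of 30 maps to 0.30):

```python
def filter_by_confidence(predictions, threshold=0.30):
    """Keep only predictions whose score meets or exceeds the threshold."""
    return [p for p in predictions if p["score"] >= threshold]

preds = [
    {"class": "Hard Hat", "score": 0.92, "bbox": [34, 20, 88, 70]},
    {"class": "Head", "score": 0.18, "bbox": [120, 15, 160, 60]},
]
print(filter_by_confidence(preds))  # only the 0.92 "Hard Hat" detection survives
```

Lowering the threshold surfaces more suggestions (and more noise); raising it keeps only the detections the model is most sure about.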
In the project’s Dashboard, you can check the status of your assistants and see when Hasty will trigger the model retraining next.
We fully annotated our training set using the Bounding Box tool and the OD assistant. It took plenty of time due to the dataset's scale, but the assistant saved us a great deal of effort. Its performance improved massively throughout the project, and at some point, we accepted all the assistant's suggestions without labeling anything manually.
This is how the assistant worked on the latest training images.
As you might know, the annotation process does not end when the last image is labeled. You need to ensure you have not made any crucial mistakes during annotations.
The conventional approach is to check all the images manually. Still, we recommend using Hasty's AI Consensus Scoring (AI CS) feature to automate the QA process. You can access it in the project's Dashboard or the annotation environment.
Click on the Create new run button and fill in all the necessary fields.
We suggest ticking the Retrain model box. With it, you can be sure that the AI CS feature will use the model trained on the latest data. Then click on the Create button to start the run.
It might take some time, but once the run is complete, click on its name, and you will get to the Summary page. The results of each run are presented in the form of a dashboard, so you can review the suggestions and fix the issues from one place. Also, Hasty allows you to filter the suggestions by the error type. For more details on the AI CS feature, please check our documentation.
In our case, AI CS found many labels we missed while annotating the training set. We quickly fixed all the issues with the dashboard and ensured that the annotations were good. Therefore, we did not spend much time on the QA process and immediately switched to the fascinating task of training custom ML models.
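One way to understand how "missed label" checks like this can work in principle: confident model predictions that overlap no existing annotation are candidates for objects the annotator skipped. A hedged sketch using intersection-over-union (IoU); the helper names and thresholds are our own illustration, not a description of how AI CS is implemented:

```python
def iou(a, b):
    """Intersection-over-union of two [x_min, y_min, x_max, y_max] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def possible_missing_labels(predictions, ground_truth, iou_thr=0.5):
    """Predictions that overlap no existing label may be objects the annotator missed."""
    return [p for p in predictions
            if all(iou(p["bbox"], g) < iou_thr for g in ground_truth)]

gt = [[0, 0, 10, 10]]                      # one existing annotation
preds = [{"class": "Head", "score": 0.9, "bbox": [1, 1, 10, 10]},    # matches gt
         {"class": "Head", "score": 0.8, "bbox": [50, 50, 60, 60]}]  # unmatched
print(possible_missing_labels(preds, gt))  # flags only the [50, 50, 60, 60] box
```

Each flagged box is only a suggestion; a human reviewer still decides whether it is a real missed object or a false positive.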
You can do this in Hasty with our no-code solution, Model Playground. You can access it through the project’s Dashboard or directly in the annotation environment.
In Model Playground, you first need to create a new data split through the Create new split button in the upper right corner. Please fill in all the fields in the pop-up window and proceed by clicking Create.
The split will be created in a matter of minutes. Please click on it once it becomes active and then on the New experiment button. You will get to the experiment’s menu, where you can control every available parameter and transform without spending time on MLOps.
In our case, we added augmentations (Horizontal Flip and Rotate) and a scheduler (ReduceOnPlateau), adjusted the depth of the backbone ResNet model (101), and started the experiment. It took a model about half an hour to train, and as soon as it was ready, we deployed it as the corresponding AI assistant in the annotation environment. To do so, please navigate to the Deploy & export page in the split menu, choose the model in the Deploy section, and click Update model.
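To make the experiment settings above easy to scan, here is a hypothetical summary as a plain Python dict. This is our own notation for the choices we made in the UI, not Hasty's actual configuration format:

```python
# Hypothetical summary of the Model Playground experiment described above.
# Hasty configures these through its UI; this dict is just for illustration.
experiment = {
    "task": "object_detection",
    "backbone": {"architecture": "ResNet", "depth": 101},
    "augmentations": ["HorizontalFlip", "Rotate"],
    "scheduler": {"name": "ReduceOnPlateau"},  # lowers the LR when the metric stalls
}
print(experiment["backbone"])  # {'architecture': 'ResNet', 'depth': 101}
```

Writing the settings down like this also makes it easier to compare experiments later.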
Let’s check how the custom model worked on the test images. We chose the complex samples with many potential objects of various classes to see if the model could perform adequately in such circumstances.
As you can see, the custom model worked better than the assistant, predicting adequate labels for many difficult pictures with overlapping objects of different classes (with the confidence threshold set at 30). Also, the model did not produce many artifacts or extra labels at confidence above 30.
Of course, the custom model still made some mistakes and produced artifacts. Unfortunately, a single dataset cannot fix that in real life. Sometimes AI teams create separate subsets addressing each edge case to try to overcome such problems.
To further improve the solution, we should add more data reflecting the edge cases. However, from the user’s perspective, the general performance of the custom model as the AI assistant was good, as its suggestions were valuable. So, the results of the work we did in this post are promising. Adding more data and spending more time and effort on QA&QC and model training and tuning will help you get a production-ready model for the “Hard Hats” use case in weeks.
Hopefully, this post helped you get a more comprehensive view of using Hasty for an Object Detection task.
Only 13% of vision AI projects make it to production. With Hasty, we boost that number to 100%.
Our comprehensive vision AI platform is the only one you need to go from raw data to a production-ready model. We can help you with:
Labeling 10x faster with our AI Assistants.
Automating quality control, making it 35x faster, with our AI Consensus Scoring feature.
Training models in our no-code Model Playground, which can then be used to improve labeling and QA automation even further.
All while keeping you in control and your data safe.
All the data and models you create always belong to you and can be exported and used outside of Hasty at any time, entirely for free.
You can try Hasty by signing up for free here. If you are looking for additional services like help with ML engineering, we also offer that. Check out our service offerings here to learn more about how we can help.
Thanks for reading, and happy training!