This assistant helps you find, create, and label shapes, offering a significantly faster way of preparing data for semantic segmentation.
It can be selected by pressing this icon:
or by pressing “S”.
Important: The assistant will not be available when you start a new project. This is because the underlying model first needs data to work with, and that data comes from the annotations made in the project. We start training the tool after you have set 10 images to "Done" or "To review".
The assistant is straightforward to use. When selected, you will see shapes with an orange dotted border like this:
Notice that these shapes can have different colors. The colors match your label classes, just slightly more transparent. What you see here are potential objects that our model has found in your image, along with the label class the algorithm thinks each object has.
To accept one of our algorithm's suggestions, all you have to do is left-click the shape you want to select.
You can also accept all suggestions, as we do in the GIF above, by pressing "enter".
Please note that all suggestions can be edited after you've accepted them. Just select the annotation using the [move/edit tool](/content-hub/userdocs/annotation-environment/move-edit) and edit it to your satisfaction.
If you don’t see any shapes or if they are not covering the objects in the way that you want, you can adjust the confidence modifier.
If you are curious to see the assistant in practice before you start a new project, check out our Getting started with Hasty tutorial.
Semantic segmentation is the hardest type of annotation problem to build automation for. Without going into too much detail, the shapes of annotations tend to be irregular, which means our model needs more data to be accurate. Having said that, keep in mind that what makes Hasty different is that our AI assistants improve the more data they see. After 20 images, you might have a 10% assistance rate (the percentage of annotations created by assistants). When you've annotated 100 images, it might be 40-65%. Once you have thousands of images annotated, we can offer very high percentages of automation - anywhere from 88% to 94% - depending on the use case.
The models retrain after seeing 20% more data. That means a new training run is initialized after 12, 15, 19 images, and so on.
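As an illustration, this retraining schedule can be sketched as a simple geometric progression. The function below is a hypothetical sketch, not Hasty's actual implementation, and it assumes the image count is rounded up at each step; the exact rounding Hasty uses may differ slightly.

```python
import math

def retrain_schedule(start: int, steps: int) -> list[int]:
    """Hypothetical sketch: a retraining is triggered each time
    roughly 20% more images are done than at the previous training.
    Not Hasty's actual code; rounding is an assumption."""
    thresholds = []
    count = start
    for _ in range(steps):
        count = math.ceil(count * 1.2)  # 20% more data than last time
        thresholds.append(count)
    return thresholds

# Starting from the initial 10 "Done" images:
print(retrain_schedule(10, 3))
```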
This modifier controls which potential shapes you are shown. The higher the confidence value, the fewer potential shapes you see, but those shapes are the ones that our model is the most confident in. The modifier can be changed by adjusting the value in the tool settings bar or by using the hotkeys “,” and “.”.
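Conceptually, the confidence modifier acts as a threshold filter over the model's suggestions. The snippet below is a hypothetical illustration of that behavior, not Hasty's actual code; the labels and scores are made up.

```python
# Hypothetical suggestions: (label class, model confidence)
suggestions = [("car", 0.91), ("person", 0.55), ("tree", 0.30)]

def visible_suggestions(suggestions, threshold):
    """Return only the shapes whose confidence meets the threshold,
    mirroring how raising the modifier hides low-confidence shapes."""
    return [s for s in suggestions if s[1] >= threshold]

# A low threshold shows more (but less certain) shapes:
print(visible_suggestions(suggestions, 0.25))  # all three suggestions
# A high threshold shows fewer, more confident shapes:
print(visible_suggestions(suggestions, 0.80))  # only ("car", 0.91)
```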
To see this assistant in action, check out our semantic segmentation page on our website.
You can also check out our MP wiki for semantic segmentation if you are interested in creating and experimenting with models.