Of the many questions we get from our users, a large share concern the status and training of our AI assistant models. We’re the first to admit this has been a bit of a black box, with users asking us questions like:
Now, you can answer these questions yourself. With our new AI assistants status page, you can see how models improve over time and get an idea of how you are progressing towards annotation automation.
Go to your project settings:
When in your project settings, go to AI assistants status:
You can now see how your models are performing and changing over time.
On the AI assistants status page, every potential model available in Hasty is displayed. If a model has been successfully trained, you will see this:
There are four important pieces of information:
We hope this new visibility into model performance in Hasty helps you better understand what's going on behind the scenes and how automation improves over time.
What's important to know here is that you might not see gradual improvement from the start. Machine learning models are fickle beasts and often take some time to become accurate.
Here's an example from an internal project we set up for a customer demo. Early on, the training loss starts very low while the validation loss is much higher, a gap that typically signals the model is overfitting to the small amount of annotated data. As we continued to annotate, the two metrics converged, indicating a better-performing model.
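To make the idea concrete, here is a minimal sketch (with entirely hypothetical loss numbers, not Hasty's actual metrics) of why a shrinking gap between training and validation loss suggests better generalization:

```python
# Hypothetical example: as more images are annotated, the gap between
# training loss and validation loss shrinks, indicating less overfitting.

def loss_gap(train_losses, val_losses):
    """Absolute gap between training and validation loss at each checkpoint."""
    return [abs(v - t) for t, v in zip(train_losses, val_losses)]

# Made-up loss curves at five training checkpoints:
train = [0.20, 0.25, 0.30, 0.32, 0.33]
val   = [0.90, 0.70, 0.50, 0.40, 0.35]

gaps = loss_gap(train, val)
print(gaps)  # the gap shrinks checkpoint by checkpoint
```

The absolute values matter less than the trend: two curves moving towards each other are a good sign that the model is learning patterns that hold beyond the images it was trained on.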