CloudFactory has acquired Hasty to offer a complete end-to-end Vision AI solution.
30.07.2021 — Alex Wennman

Automated quality control, opening up the AI black box, and more

We’ll go through the most critical parts of the new release and give you a bit more insight into what we’ve built and how it might benefit you.


AI-automation for quality control

State-of-the-art research available through the click of a button

After more than six months of development, we are releasing our AI quality control feature, based on state-of-the-art research. Using purpose-built AI models, we can automatically surface likely labeling errors in your data.

We do this by running your data through a dedicated model that looks at all the labels you want to QA. We then compare the model's output with the original annotations to see how well they align, using Confident Learning techniques to find wrongly assigned labels in your data. For example, say we are annotating a football game (soccer for our American readers) and want to check that all classes are correct before training our model.

To make your life a bit easier, we sort the results from the likeliest error (i.e., the most significant gap between model and human) to the least likely.
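To illustrate the idea (a generic sketch, not Hasty's actual implementation), one core ingredient of Confident Learning is ranking examples by the model's confidence in the human-assigned label; the lower that "self-confidence", the bigger the model-versus-human gap:

```python
import numpy as np

def rank_likely_label_errors(labels, pred_probs):
    """Rank examples from most to least likely mislabeled.

    labels: (n,) array of human-assigned class ids.
    pred_probs: (n, k) array of model class probabilities per example.
    Sorts by the model's confidence in the given label, lowest first.
    """
    self_confidence = pred_probs[np.arange(len(labels)), labels]
    return np.argsort(self_confidence)

# Toy example: four annotations, two classes.
labels = np.array([0, 1, 1, 0])
pred_probs = np.array([
    [0.90, 0.10],
    [0.20, 0.80],
    [0.95, 0.05],  # labeled 1, but the model is confident it's 0
    [0.60, 0.40],
])
order = rank_likely_label_errors(labels, pred_probs)
# order[0] == 2: the likeliest error is shown to the reviewer first
```

Real Confident Learning does more (it estimates per-class confidence thresholds and the joint distribution of noisy and true labels), but the review queue follows this same most-suspicious-first ordering.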

An example of Error Finder
In this example, we see some of the more obvious errors found in a PCB project

You can then decide, with one click, what to change and what to keep as-is.

Error Finder in use
Just click accept or reject to quickly QC your dataset

We see this new human-machine workflow as something of a "quiet revolution" in the vision AI space: you no longer have to spend time finding errors in your data, only fixing them. This can save organizations working on vision AI projects an enormous amount of time and keep budgets in line.

To give you a baseline number, we compared Error Finder with today's gold-standard technique, consensus scoring. We found that our approach could reduce QA effort by up to 15x for smaller-scale projects and up to 33x for larger-scale ones.

It’s exciting, and it’s available to everyone. Learn more about what it is and how you can use it here.

AI assistants status

Some of the most common questions we get concern the status and training of our AI assistant models. We’re the first to admit this has been a bit of a black box for users.

Now, you can answer these questions yourself. With our new AI assistants status page, you can see how models improve over time to get an idea of how you are progressing towards annotation automation.

Here, you can see the status of all models available in Hasty

You can also see what is needed to train the next model and the current status of your model(s).

Now you can see how models change over time, and whether they are improving

Next up, we will add the same functionality to custom models created in Model Playground so that you can see how more data helps your models.

For more information, feel free to check out our docs.

Model Playground grows up

First, a big thank you to all the beta testers we’ve had for Model Playground. With your feedback, we’ve been able to push what we offer in terms of model building and testing and are getting closer to releasing our model building and experimentation functionality to the rest of the world.

In our latest update, we’ve added a host of new visualizations and plots so you can see how your new model experiment is performing and how it compares with other experiments. We also added many new solvers and augmentations.

New visualizations and plots

Best performing table and metric overview


See which of your models perform best in terms of your primary metric(s) and inference speed with our “Best performing” widget, and get an overview of how different experiments compare with our new “Metric overview” table.

Running time and GPU consumption


Get a better understanding of hardware consumption with our running time and GPU consumption widgets.

Hyperparameter comparison


Check what differs between experiments using our hyperparameter comparison, so that you can quickly figure out which parameters matter most for your results.
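The idea behind such a comparison can be sketched in a few lines (a generic illustration, not Hasty's implementation): collect the keys whose values differ between two experiment configs, so identical settings drop out and only the meaningful differences remain.

```python
def diff_hyperparameters(a, b):
    """Return {key: (value_in_a, value_in_b)} for keys that differ
    between two experiment configs (missing keys show up as None)."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

# Hypothetical experiment configs.
exp1 = {"lr": 1e-3, "batch_size": 8, "solver": "AdamW"}
exp2 = {"lr": 1e-4, "batch_size": 8, "solver": "AdamW"}
diff = diff_hyperparameters(exp1, exp2)
# Only the learning rate differs between the two experiments.
```

With dozens of hyperparameters per experiment, surfacing just the delta is what makes it practical to attribute a metric change to a specific setting.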

Confusion matrix and classification prediction visualization


Using this, we figured out that the model struggled with diode, resistor, and capacitor classes as they were all fairly similar


Still a bit raw visually, these two widgets were created at the request of one of our customers so that they could get a better idea of how their classification model was progressing.

First, you have the confusion matrix, which tells you which classes the annotators struggled with and what the model thinks the label should be instead. This can be very helpful, as you can see which classes the model needs more data for and where it struggles.

Secondly, we have the classification prediction visualization which can be used to do a further deep-dive on your data to figure out what the model is seeing. In tandem, they can help you find issues with your data and figure out how to improve your model.

We are also working on adding the same widgets for other types of annotation in the near future.

Additionally, we’ve added



Inference monitoring

With users successfully training models in Hasty, more and more of them are using our Inference engine API. Although adoption is still early, it’s encouraging to see users getting models developed in Hasty into production with a minimal amount of coding. However, one successful feature leads to additional feature requests: we are currently building out monitoring tools for our inference engine to give you insight into what the model sees on production data.

It’s still early days, but we have built a first interface so that you can see what our model sees.

Using inference monitoring, we can quickly see here that setting max predictions to 10 was a mistake
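One simple check this kind of monitoring enables (an illustrative sketch, not Hasty's API; the function and parameter names are made up) is flagging images whose prediction count hits the max-predictions cap, a hint that the cap is truncating real detections:

```python
def flag_saturated_images(predictions_per_image, max_predictions):
    """Return indices of images whose prediction count reached the cap.

    predictions_per_image: list of per-image prediction counts from
    production inference. Images at the cap likely had detections cut off,
    so max_predictions should probably be raised.
    """
    return [i for i, n in enumerate(predictions_per_image)
            if n >= max_predictions]

counts = [3, 10, 7, 10, 10]
saturated = flag_saturated_images(counts, max_predictions=10)
# Three of five images hit the cap: strong evidence the limit is too low.
```

If a large share of production images saturate the cap, the fix is usually to raise max predictions and rely on a confidence threshold to filter the output instead.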

We will continue to extend and flesh out our monitoring functionality in the coming weeks and months - but to do so correctly, we need your help. If you are interested in using Hasty’s inference engine and monitoring, and are willing to be a beta user and give us feedback on what you like, would change, and find missing - both in the engine itself and in our monitoring solution - email me at alex(at)

Shameless plug time

Only 13% of vision AI projects make it to production. With Hasty, we boost that number to 100%.
Our comprehensive vision AI platform is the only one you need to go from raw data to a production-ready model. We can help you with:

All the data and models you create always belong to you and can be exported and used outside of Hasty at any time, entirely for free.

You can try Hasty by signing up for free here. If you are looking for additional services like help with ML engineering, we also offer that. Check out our service offerings here to learn more about how we can help.

Keep reading

Get to production reliably.

Hasty is a unified agile ML platform for your entire Vision AI pipeline — with minimal integration effort for you.

Start for free
Check out our services