Sometimes even AI solutions underperform - a pain that Data Science and R&D teams try to avoid or overcome daily. No one is safe from a neural network occasionally showing a bad result, and AI teams should not be afraid of that fact; they should know what to do if such a situation occurs.

At Hasty, we rely heavily on our AI-powered features, such as AI Assistants and AI Consensus Scoring, to deliver real value to you. Still, we are in the same zone as every other AI team - our solutions might not perform as intended out of the box. This can happen for various reasons, but drawing on our experience in the field, we have identified the most likely causes. On this page, we want to share them with you, explain how they can affect AI Consensus Scoring performance, and lay out the best steps to overcome the problem if it occurs.

Let’s cut to the chase and keep the conversation straight here. The performance of any AI solution is only as good as the data it was trained on, and in the case of AI Consensus Scoring, the training set is your data. We are sorry to address the elephant in the room, but you have probably come to this page after getting a notification that an AI CS run did not complete because the underlying Machine Learning metrics were not good enough. Right now, you are likely wondering how you can improve the metrics to get a run done. Let’s check out the possible steps.

Our AI features are strongly tied to one another, so it is reasonable to check the performance of the AI Assistants first. If they work fine, AI Consensus Scoring should work as well. If the AI Assistants work fine but you still experience difficulties with AI CS - please reach out to us.

However, if the AI Assistants produce strange results, AI CS will likely underperform too. There are two potential reasons for this:

  • You might simply not have enough data in your run for the AI algorithms to train adequately;
  • Your data might be too noisy.

The fix is relatively simple in both cases - add more data, or use the Manual Review feature to clean up bad data. Let’s break both cases down.

As you might know, for many tasks, AI models need a large amount of high-quality training data to have a chance at good performance. In vision AI, this demand is especially relevant. You have probably heard of massive CV benchmark datasets such as ImageNet and COCO, which consist of hundreds of thousands of images. Now imagine a complex AI solution such as AI Consensus Scoring, whose job is to find potential mistakes in your data. Both the task and the algorithms underneath AI CS are difficult even compared to other AI tasks and neural network architectures.

So, your primary goal should be to give AI CS all the resources its algorithms need to perform. Suppose you give it only a small amount of data, even high-quality data. In this case, the feature will likely underperform because it will not have enough images and/or annotations to train on (for example, if you have 100 images, 20 different classes, and only 10 annotations per class). Therefore, if you see a notification telling you that the AI CS run did not complete - please check the quantity of data you passed to the run. We cannot tell you the exact number of images or annotations AI CS needs to perform well, as it strongly depends on the vision AI task and the case you are trying to solve. Still, we ask you to add as much data as possible, because insufficient data is the most likely blocker for the feature.
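If you want to check this before scheduling a run, a quick tally of your dataset is enough. Below is a minimal Python sketch over a COCO-style annotation export; the file layout and the threshold of 50 annotations per class are illustrative assumptions on our part, not official AI CS minimums.

```python
# A rough pre-run sanity check on dataset size, assuming a COCO-style
# JSON export with "images", "annotations", and "categories" keys.
# The threshold below is an illustrative guess, not an official minimum.
import json
from collections import Counter

def summarize_dataset(path, min_annotations_per_class=50):
    with open(path) as f:
        data = json.load(f)

    id_to_name = {c["id"]: c["name"] for c in data["categories"]}
    per_class = Counter(a["category_id"] for a in data["annotations"])

    print(f"images:      {len(data['images'])}")
    print(f"classes:     {len(id_to_name)}")
    print(f"annotations: {len(data['annotations'])}")

    # Flag classes that are probably too sparse for a run to train on.
    for cat_id, name in id_to_name.items():
        count = per_class.get(cat_id, 0)
        if count < min_annotations_per_class:
            print(f"  '{name}' has only {count} annotations - consider adding more")

# summarize_dataset("annotations.json")  # point this at your own export
```

If several classes come out sparse, adding images and annotations for those classes first is usually the fastest way to unblock a run.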

In some cases, you might pass many images to the run and still receive the notification. In such a situation, we recommend you consider whether your data is clean enough for any AI algorithm to be trained on. In Data Science, the other vital aspect besides data quantity is data quality. If the quality of the annotations is poor, AI models will not be able to perform well. Sure, they will output something, but their generalization and prediction power will be questionable.

Therefore, if you pass noisy data to AI CS and hope that the solution will somehow sort everything out, you will be disappointed. AI CS, just like any other AI solution, needs good-quality data to train adequately. So, if you would describe your data as noisy, please use the Hasty Manual Review feature to clean it up before scheduling an AI CS run.
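If you are not sure where the noise hides, a simple heuristic sweep can point you to the annotations worth reviewing first. The sketch below flags near-duplicate bounding boxes on the same image and degenerate, near-zero-area boxes; the [x, y, width, height] box format, the 0.9 IoU cut-off, and the area threshold are our illustrative assumptions, not how AI CS works internally.

```python
# An illustrative heuristic for spotting noisy annotations: near-duplicate
# bounding boxes on the same image and degenerate (near-zero-area) boxes.
# Boxes are assumed to be [x, y, width, height]; thresholds are arbitrary.
from itertools import combinations

def iou(a, b):
    """Intersection over union of two [x, y, w, h] boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def flag_suspects(annotations, dup_iou=0.9, min_area=4.0):
    """annotations: list of dicts with 'image_id' and 'bbox' keys."""
    suspects = []
    by_image = {}
    for ann in annotations:
        by_image.setdefault(ann["image_id"], []).append(ann)
    for anns in by_image.values():
        for ann in anns:
            if ann["bbox"][2] * ann["bbox"][3] < min_area:
                suspects.append(ann)  # degenerate box - likely a mis-click
        for a, b in combinations(anns, 2):
            if iou(a["bbox"], b["bbox"]) > dup_iou:
                suspects.extend([a, b])  # near-duplicate pair
    return suspects

# Example usage with one near-duplicate pair and one degenerate box:
demo = [
    {"image_id": 1, "bbox": [10, 10, 100, 80]},
    {"image_id": 1, "bbox": [12, 11, 100, 80]},  # near-duplicate
    {"image_id": 2, "bbox": [5, 5, 1, 1]},       # degenerate
]
for ann in flag_suspects(demo):
    print("review candidate:", ann)
```

Whatever such a sweep flags makes a natural first queue for Manual Review.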

We are always trying to improve our features, so if you are sure that both data quality and quantity are in order, please do not wait - get in touch. We are happy to help you with troubleshooting.
