
TorchScript Sample Inference Scripts

In the following pages, we provide sample scripts that can be used to run TorchScript models in Python. Please keep in mind that these models can also be run in C++ using the TorchScript API.
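As a minimal sketch of what such a script looks like: the snippet below scripts and saves a tiny placeholder model purely for illustration (in practice you would use the TorchScript file exported from Model Playground; the filename "model.pt" and the input size are assumptions), then loads it with `torch.jit.load` and runs inference.

```python
import torch

# For illustration only: script a tiny stand-in model and save it.
# In practice, skip this step and use the TorchScript file you exported.
class TinyNet(torch.nn.Module):
    def forward(self, x):
        # Collapse spatial dims and squash to (0, 1) -- a toy "classifier".
        return torch.sigmoid(x.mean(dim=(2, 3)))

torch.jit.script(TinyNet()).save("model.pt")

# Loading and running -- this part applies to any TorchScript model file.
model = torch.jit.load("model.pt", map_location="cpu")
model.eval()

dummy = torch.rand(1, 3, 224, 224)  # one RGB image; match your model's input size
with torch.no_grad():
    out = model(dummy)
```

The same saved file can be loaded in C++ with `torch::jit::load`, which is what makes TorchScript useful for deployment outside Python.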

Please also note that if you require smaller or faster models, or models made specifically for mobile devices, you may want to go back to Model Playground and choose different architectures, use smaller images, or lower the model parameters to optimize runtime and/or memory usage as needed.

If you notice a discrepancy between the metrics reported in Model Playground and those from your deployed model, it is quite possible that you are not applying the correct image transforms.

We recommend looking at the "config.yaml" file to see the transforms you used for validation/testing, and using the excellent albumentations library, which provides almost all of them. Details for the missing ones will be provided shortly in the [Hasty visionAI wiki](/content-hub/mp-wiki).

Please note that you have to replicate or reimplement these transforms yourself if you are deploying to an environment where albumentations is not available. You can read more about using and building the transformations on this page.

Last updated on Jun 01, 2022
