Once you have a working model, export it and deploy it wherever you need.
Most of the teams we meet use either PyTorch or TensorFlow. We support model exports for both frameworks, so whatever your tech stack looks like, you can use us.
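As a rough illustration of what a portable export looks like on the PyTorch side, here is a minimal sketch using TorchScript. The model architecture is a hypothetical stand-in, not an actual Model Playground export:

```python
import torch
import torch.nn as nn

# Hypothetical trained model (stand-in for one built in Model Playground)
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Trace the model into TorchScript, a self-contained serialized graph
example_input = torch.randn(1, 16)
traced = torch.jit.trace(model, example_input)
traced.save("model.pt")

# Reload it anywhere PyTorch runs -- no training code required
restored = torch.jit.load("model.pt")
output = restored(example_input)
```

The point of an export like this is that the saved file carries everything inference needs, so it can run on your own servers or at the edge without the original training pipeline.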
A cloud-based inference service has its advantages, but you will always pay for the privilege. By exporting your models, you control your costs and optimize according to your needs.
Are you looking to get your model ready for production? In Model Playground, you can quantize and prune your models to minimize their footprint while keeping the same great results.
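To give a flavor of what pruning and quantization do under the hood, here is a minimal PyTorch sketch. The model and the 30% pruning ratio are illustrative assumptions, not Model Playground internals:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical model standing in for a trained one
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Pruning: zero out the 30% smallest-magnitude weights of the first layer
prune.l1_unstructured(model[0], name="weight", amount=0.3)
prune.remove(model[0], "weight")  # bake the mask into the weight tensor

# Dynamic quantization: store Linear weights as int8 instead of float32
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

output = quantized(torch.randn(1, 16))
```

Pruning shrinks the model by removing weights that contribute little, and quantization cuts the size of the ones that remain; together they reduce the footprint with minimal impact on accuracy.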
Dr. Alexander Roth