17.01.2022 — Mihail

Using the Hasty Inference Engine API

As part of our (link: https://hasty.ai/blog/building-the-data-flywheel-to-overcome-data-shift/ text: data flywheel), you can get inference results from the model you trained using the (link: https://docs.hasty.ai/ text: Hasty API). We provide a way to upload an image and receive predictions, which allows you to effortlessly integrate Hasty into your product as the prediction backend.

To demonstrate this process, we will build a simple web app that first prompts the user to upload an image and then displays the predictions. This flow can be extrapolated to build almost anything, from a meme-worthy hot-dog-identifying app to a defect-detection app for your business.

Here, we’re using our standard inference engine. If you’re interested in real-time inference, please contact our sales team.

For the scope of this exercise, we will skip building a backend API and use the API key directly from the front end. Avoid doing this in a real application: putting the API key in client code is unsafe.

Acquiring the API key and the project id

Navigate to the workspace that contains your project, head to “API accounts,” and generate a key. Copy it and store it somewhere secure; we won’t be able to display it again.

Then, either navigate to your project and copy the project ID from the address bar or grab it from the project list endpoint response.
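If you go the API route, the sketch below shows how you might list your projects to find the ID. Note that the exact endpoint path (/v1/projects) and the response shape (an items array with id and name fields) are assumptions here; check the API docs for the authoritative format.

// Sketch: list projects to find the project ID. Run this from a small
// Node 18+ script rather than the browser; see the CORS note in the app
// setup below. Endpoint path and response shape are assumptions.
const listProjects = async () => {
  const response = await fetch("https://api.hasty.ai/v1/projects", {
    headers: { "X-Api-Key": "<key>" },
  });
  const { items } = await response.json();
  items.forEach((project) => console.log(project.id, project.name));
};

listProjects();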

App setup

We are going to use React to build the interface, as well as Node.js to simulate the backend. Since the API blocks requests originating from unknown domains, we will use a thin proxy to circumvent the CORS checks (don’t do this in production). In a real-life scenario, the calls from your app would reach your own API, which would then call the Hasty API and pass the key along.
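One way to run such a proxy locally is the cors-anywhere package. The snippet below is a minimal sketch of that setup, not necessarily what the demo repository does:

// proxy.js: a minimal local CORS proxy using the cors-anywhere package.
// A sketch for local development only; the demo repo may wire this differently.
const corsAnywhere = require("cors-anywhere");

corsAnywhere
  .createServer({
    originWhitelist: [], // allow any origin (acceptable for a local demo only)
  })
  .listen(8080, "localhost", () => {
    console.log("CORS proxy running on http://localhost:8080");
  });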

We define the base path for the API calls in a way that would forward the calls through the proxy:

const API_BASE_URL = `http://localhost:8080/https://api.hasty.ai`;

We provide two text fields in the UI that you can use to paste your API key and the project ID. Alternatively, you can hardcode these values for now:

const API_KEY = "<key>";
const PROJECT_ID = "<id>";

const [key, setKey] = useState(API_KEY);
const [projectId, setProjectId] = useState(PROJECT_ID);
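Those text fields can be wired up as plain controlled inputs; a minimal sketch (the demo's actual markup and styling may differ):

// Minimal controlled inputs for the key and project ID (a sketch; the
// demo's actual markup and styling may differ).
<>
  <input
    placeholder="API key"
    value={key}
    onChange={(e) => setKey(e.target.value)}
  />
  <input
    placeholder="Project ID"
    value={projectId}
    onChange={(e) => setProjectId(e.target.value)}
  />
</>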

We should also prepare the headers for our requests. We need to send the key for the API to authorize us.

const headers = {
  "X-Api-Key": key,
  "content-type": "application/json",
};

Loading the model

Unless your model is actively in use, it will not be in a loaded state. Let’s pick the instance segmentor model as an example. To get the model status, we need to hit the corresponding endpoint. UI-wise, we will represent it as a button that triggers the model status check and blocks any uploads until the model is loaded.

In a real-life scenario, your backend would need to check this as well before requesting any inferences. The handler for this button goes as follows:

const handleModelStatusCheck = async () => {
  setModelStatus("Checking ...");
  const url = `${API_BASE_URL}/v1/projects/${projectId}/instance_segmentor`;
  try {
    const response = await fetch(url, { headers });
    const json = await response.json();
    setModelStatus(json.status || "Error");
    setModelId(json.model_id || null);
  } catch (e) {
    setModelStatus("Error");
  }
};

After getting a response from the endpoint, we store the status (LOADED or LOADING) and, if the model is loaded, its model ID. We will need the model ID later to query for inferences.

Note: if the model is not loaded, this request will trigger the loading process. It may take up to several minutes.
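If you want your app to wait for the model automatically, one option (not part of the original demo) is to poll the same status endpoint until it reports LOADED:

// Hypothetical helper: poll the model status until it is loaded. The first
// call also kicks off loading if the model is cold.
const waitForModel = async () => {
  const url = `${API_BASE_URL}/v1/projects/${projectId}/instance_segmentor`;
  for (let attempt = 0; attempt < 60; attempt++) {
    const response = await fetch(url, { headers });
    const json = await response.json();
    if (json.status === "LOADED") return json.model_id;
    // Wait 10 seconds between checks; loading can take several minutes.
    await new Promise((resolve) => setTimeout(resolve, 10000));
  }
  throw new Error("Model did not load in time");
};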

Uploading the image

Now that we have our model ID, it’s time to upload the image. This step is the most complex, as we are using signed URLs for our cloud storage.

Don’t worry, we do not write production code like this at Hasty. Let’s go through it step by step.

We are going to use an endpoint that will generate the signed URLs for us. It accepts a parameter that describes the number of signed URLs we require. In our case, it will be just one:

const signedUrlsUrl = `${API_BASE_URL}/v1/projects/${projectId}/image_uploads`;
const signedUrlsResponse = await fetch(signedUrlsUrl, {
  headers,
  method: "POST",
  body: JSON.stringify({ count: 1 }),
});
const urlsJson = await signedUrlsResponse.json();
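Based on how we destructure it below, the response contains one item per requested URL, roughly in this shape (consult the API docs for the exact fields):

// Approximate shape of the image_uploads response (inferred from the
// destructuring below; fields beyond id and url are not shown).
// {
//   "items": [
//     { "id": "<upload id>", "url": "<signed upload URL>" }
//   ]
// }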

Next, we will use that signed URL to actually send the image to the bucket:

const { id, url } = urlsJson.items[0];
const data = await readAsArrayBuffer(e.target.files[0]);

await fetch(`http://localhost:8080/${url}`, {
  body: data,
  method: "PUT",
  headers: {
    "content-type": "image/*",
  },
});
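The readAsArrayBuffer call above is a small helper that wraps the FileReader API in a promise. A possible implementation (the demo repository ships its own) looks like this:

// Promise wrapper around FileReader to get the raw bytes of the selected file.
const readAsArrayBuffer = (file) =>
  new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result);
    reader.onerror = () => reject(reader.error);
    reader.readAsArrayBuffer(file);
  });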

We will also need to display the image in the UI, so we save its data URL and dimensions. We will need those to draw the inferences on top of the image.

const imageUrl = await readAsDataURL(e.target.files[0]);
const imageSizes = await getImageSize(e.target.files[0]);
setImageUrl(imageUrl);
setImageSizes(imageSizes);
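readAsDataURL and getImageSize are similar promise-based helpers. Possible implementations (again, the demo repository has its own versions) could be:

// Read the file as a data URL so it can be used directly as an <img> src.
const readAsDataURL = (file) =>
  new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });

// Get the natural width and height of the image by loading it off-screen.
const getImageSize = (file) =>
  new Promise((resolve, reject) => {
    const objectUrl = URL.createObjectURL(file);
    const img = new Image();
    img.onload = () => {
      URL.revokeObjectURL(objectUrl);
      resolve({ width: img.naturalWidth, height: img.naturalHeight });
    };
    img.onerror = reject;
    img.src = objectUrl;
  });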

Now, let’s get the inferences using this endpoint:

const response = await fetch(
  `${API_BASE_URL}/v1/projects/${projectId}/instance_segmentation`,
  {
    headers,
    method: "POST",
    body: JSON.stringify({
      confidence_threshold: 0.5,
      max_detections_per_image: 10,
      model_id: modelId,
      upload_id: id,
    }),
  }
);

const json = await response.json();
setLabels(
  json.map((inf) => ({
    ...inf,
    x: inf.bbox[0],
    width: inf.bbox[2] - inf.bbox[0],
    y: inf.bbox[1],
    height: inf.bbox[3] - inf.bbox[1],
  }))
);

As you can see, the endpoint requires the model ID and the ID of the signed URL upload; the backend uses the upload ID to fetch the image contents.
The response contains the generated masks. For simplicity’s sake, we will only display their bounding boxes, hence the need to calculate the width and height of each inference.

Finally, we will pass this data to our hastily put together CardMediaWithAnnotations component that does some advanced mathematical calculations to display the bounding boxes on the image.
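The gist of those calculations is simply scaling the pixel coordinates returned by the API to the size the image is actually rendered at. A simplified sketch (not the actual CardMediaWithAnnotations component) could look like this:

// Simplified overlay: scale each bounding box from original image pixels to
// the displayed size and draw it as an absolutely positioned div. A sketch
// only; the real component in the demo repo differs.
const AnnotatedImage = ({ imageUrl, imageSizes, labels, displayWidth = 600 }) => {
  const scale = displayWidth / imageSizes.width;
  return (
    <div style={{ position: "relative", width: displayWidth }}>
      <img src={imageUrl} width={displayWidth} alt="uploaded" />
      {labels.map((label, i) => (
        <div
          key={i}
          style={{
            position: "absolute",
            left: label.x * scale,
            top: label.y * scale,
            width: label.width * scale,
            height: label.height * scale,
            border: "2px solid red",
          }}
        />
      ))}
    </div>
  );
};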

Note: we do not handle EXIF rotation in this demo, meaning that the browser might read the EXIF headers and rotate the image accordingly, while the backend operates on the image’s binary data. You might get inferences that are displayed at an angle.

End result

We store these images, and you can access them in the project’s Inference Monitoring section. There, you can delete them or upload some of them to one of your datasets if they highlight a case that your model isn’t handling perfectly yet.

You can find the code for this example at github.com/hasty-ai/inference-upload-example or check it out live at demo.hasty.ai (bring your own key!).

Shameless plug time

Only 13% of vision AI projects make it to production. With Hasty, we boost that number to 100%.
Our comprehensive vision AI platform is the only one you need to go from raw data to a production-ready model. We can help you with:

All the data and models you create always belong to you and can be exported and used outside of Hasty at any given time entirely for free.

You can try Hasty by signing up for free here. If you are looking for additional services like help with ML engineering, we also offer that. Check out our service offerings here to learn more about how we can help.
