Imagine having an Object Detection dataset with the annotations in the COCO format. How would you import annotations in such a case? Well, you can simply follow the steps mentioned on the Import annotations (Beta) page. Let's see how it works.

First, you need to create your Hasty project and upload images to it. These steps are straightforward and should not cause any trouble.

Second, you must create the label classes and attributes you will use in your project. Simply follow the guide from the documentation, and it will take you seconds to do. In our case, we have 404 images of Simpsons characters. The classes are the characters themselves - Homer, Marge, Maggie, Lisa, and Bart.

The COCO annotations for the dataset look as follows:

json
      "annotations": [
    {
      "id": 1,
      "category_id": 5,
      "iscrowd": 0,
      "segmentation": [
        [
          195.17625231910944, 343.74768089053794, 183.59925788497213,
          313.0241187384044, 204.52690166975873, 289.8701298701298,
          242.3747680890537, 277.8478664192949, 265.5287569573283,
          278.2931354359925, 267.7551020408162, 288.53432282003706,
          279.3320964749535, 301.0018552875695] 
        ]
      ],
      "image_id": 1,
      "area": 0,
      "bbox": [
        177.81076066790348, 277.8478664192949, 104.19294990723563,
        200.81632653061223
      ]
    },.....]
    

The only key in the COCO file that is important for us is the "annotations" key. In the Hasty JSON v1.1 format, all the annotation information (polygons, bounding boxes, classes, etc.) for each image is stored in the "labels" list inside each "images" instance. Filling the "labels" list with the corresponding annotations given in the COCO format is the key step for successful usage of the Import annotations (Beta) feature.
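To make the target format concrete, here is what a single filled "labels" entry will look like in this guide, using the "class_name" and "polygon" fields that we generate later on (coordinates shortened for readability):

```json
"labels": [
  {
    "class_name": "Bart",
    "polygon": [
      [195.18, 343.75],
      [183.60, 313.02],
      [204.53, 289.87]
    ]
  }
]
```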

So, the third step is to export your project in the Hasty JSON v1.1 format through the Export data feature. This way, you will get a file in the necessary format that can then be expanded with the COCO annotations.

To export the project, just click on the three dots present in the bottom right corner of the project and click on the Export data button. By clicking the notification bell on the upper right side of the screen, you can download the result of your export session. In the ZIP archive, there will be a JSON file containing the information about your project.

This is what each instance in the "images" dictionary should look like:

json
      {
      "image_id": "08ec963a-5813-4a82-a111-fa34e4999479",
      "image_name": "bart_simpson-pic_0001.jpg",
      "dataset_name": "Default",
      "width": 288,
      "height": 416,
      "image_status": "NEW",
      "tags": [],
      "labels": []
    },
    

Please notice that the "labels" list is empty because there are no annotations in the project so far. The goal is to fill the "labels" list for each image using the COCO annotations. To succeed, you will need to do some coding to copy the annotations from one file to the other. But do not worry, we have got you covered.

As a programming language, we suggest using Python. Let's start by loading the COCO annotations file.

python
      f = open("instances.json") # instances.json is the COCO annotations file
dat = json.load(f)
f.close()
    

The next step is to extract the annotation ids and image ids associated with each of the annotations from the imported file.

python
      annotations_id = [dat["annotations"][i]["id"]
                  for i in range(len(dat["annotations"]))]
img_id = [dat["annotations"][i]["image_id"]
          for i in range(len(dat["annotations"]))]
    

Then, we suggest creating a dictionary that stores the relevant information about the specific annotation for each image.

python
      from collections import defaultdict

temp = defaultdict(list)
for image, annotation in zip(img_id, annotations_id):
    temp[image].append(annotation)

# temp associates the images with their annotations:
# the key is the image id, and the values are the annotation ids
    
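To see what this grouping produces, here is a tiny run with made-up ids:

```python
from collections import defaultdict

# made-up ids: three annotations, the first two on the same image
img_id = [1, 1, 2]
annotations_id = [10, 11, 12]

temp = defaultdict(list)
for image, annotation in zip(img_id, annotations_id):
    temp[image].append(annotation)

print(dict(temp))  # {1: [10, 11], 2: [12]}
```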

With that done, you can now store the annotations for each image in a format that Hasty will accept.

python
      allannot = []
for image in temp:
    annot_each = []
    for a in temp[image]:
        # use a fresh dict per annotation, otherwise every entry in
        # annot_each would point to the same object and end up identical
        ann = {}
        # annotation ids here are assumed to be 1-based and sequential,
        # so annotation id a sits at index a - 1
        ann["polygon"] = restructure_polygon(
            dat["annotations"][a - 1]["segmentation"]).tolist()
        ann["class_name"] = cat[dat["annotations"][a - 1]["category_id"]]
        annot_each.append(ann)
    allannot.append(annot_each)
    

In the code block above, we create a new instance with the "polygon" and "class_name" fields in the "labels" list. Notice that the polygon structure differs between Hasty and COCO. In COCO, a polygon is a flat list of alternating x and y coordinates:

json
      [165.59554730983302, 409.3358070500928, 157.84786641929497, 395.9925788497218]
    

Whereas Hasty wants the polygon to look like this:

json
      [[165.59554730983302, 409.3358070500928], [157.84786641929497, 395.9925788497218]]
    

The following code does the job:

python
      def restructure_polygon(polygon):
    polygon = np.array(polygon)
    reshaped_pol = np.reshape(polygon, (-1, 2))
    return reshaped_pol
    
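If you prefer to avoid the NumPy dependency, the same reshaping can be expressed with a plain list comprehension (an equivalent alternative to the function above, shown here to make the transformation easy to sanity-check):

```python
def restructure_polygon(polygon):
    # group a flat [x1, y1, x2, y2, ...] list into [x, y] pairs
    return [[polygon[i], polygon[i + 1]] for i in range(0, len(polygon), 2)]

flat = [165.59554730983302, 409.3358070500928,
        157.84786641929497, 395.9925788497218]
print(restructure_polygon(flat))  # two [x, y] pairs
```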

The class name is extracted from a dictionary called "cat" that maps the COCO category indices to the characters' names.

python
      cat = {
    1: "Homer",
    2: "Marge",
    3: "Maggie",
    4: "Lisa",
    5: "Bart",
}
    

At this point, the only task remaining is to add the annotations to the exported file.

python
      filename = "simpsons_export.json"
with open(filename, 'r')as f:
    data = json.load(f)
    i = 0
    for vals in data["images"]:
        vals["labels"] = allannot[i]
        i = i+1
filename2 = "annotated_simpson.json"
with open(filename2, 'w') as f:
    json.dump(data, f, indent=4)
    

We iterate through the empty labels of our original file, "simpsons_export.json", and replace them with annotations present in the allannot variable. Then we write this data in a new JSON file called "annotated_simpson.json".
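One caveat: this assignment assumes the images in "simpsons_export.json" appear in the same order as the image ids in the COCO file. If you would rather not rely on ordering, you can key the labels by file name instead. Below is a sketch of that idea, assuming the standard COCO "images" section with "id" and "file_name" keys (you would first collect the labels into a dict keyed by image id rather than a list):

```python
def labels_by_name(coco, labels_by_image_id):
    # map each COCO image id to its file name, then re-key the labels by name
    name_of = {img["id"]: img["file_name"] for img in coco["images"]}
    return {name_of[i]: labels for i, labels in labels_by_image_id.items()}

# tiny made-up example
coco = {"images": [{"id": 1, "file_name": "bart_simpson-pic_0001.jpg"}]}
labels = {1: [{"class_name": "Bart", "polygon": [[0.0, 0.0], [1.0, 1.0]]}]}
print(labels_by_name(coco, labels))
```

With such a mapping in hand, each exported image can receive its labels via `vals["labels"] = by_name[vals["image_name"]]`, regardless of ordering.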

The entire code looks like this:

python
      import json
from collections import defaultdict

import numpy as np


def restructure_polygon(polygon):
    # reshape a flat COCO polygon into a list of [x, y] pairs
    return np.reshape(np.array(polygon), (-1, 2))


# instances.json is the COCO annotations file
with open("instances.json") as f:
    dat = json.load(f)

# map COCO category ids to class names
cat = {
    1: "Homer",
    2: "Marge",
    3: "Maggie",
    4: "Lisa",
    5: "Bart",
}

annotations_id = [ann["id"] for ann in dat["annotations"]]
img_id = [ann["image_id"] for ann in dat["annotations"]]

# temp associates each image id with the list of its annotation ids
temp = defaultdict(list)
for image, annotation in zip(img_id, annotations_id):
    temp[image].append(annotation)

# build the Hasty "labels" list for every image
allannot = []
for image in temp:
    annot_each = []
    for a in temp[image]:
        # use a fresh dict per annotation, otherwise every entry in
        # annot_each would point to the same object and end up identical
        ann = {}
        # annotation ids are assumed to be 1-based and sequential,
        # so annotation id a sits at index a - 1
        ann["polygon"] = restructure_polygon(
            dat["annotations"][a - 1]["segmentation"]).tolist()
        ann["class_name"] = cat[dat["annotations"][a - 1]["category_id"]]
        annot_each.append(ann)
    allannot.append(annot_each)

with open("simpsons_export.json", "r") as f:
    data = json.load(f)

# assign the collected labels to the images in export order
for vals, labels in zip(data["images"], allannot):
    vals["labels"] = labels

with open("annotated_simpson.json", "w") as f:
    json.dump(data, f, indent=4)
    

Excellent, it is time to import the annotations!

Go to the project dashboard and click on the Import annotations (Beta) button.

Then create a new import and upload the expanded file in the Hasty JSON v1.1 format.

In our case, we named the import "Simpsons_labels". After the file is uploaded and checked, the import job will appear in the Import Jobs.

When the import is complete, the annotations will show up in the annotation environment.

Awesome, the labels are in place! Also, please notice that all the annotated images are marked as "Done". This is the default setting for import jobs, but you can change it if needed.
