Annotations Displaced after downloading dataset (COCO JSON)

Hello Community!

I am working on an object detection project and needed to create ground-truth annotations. I used Roboflow for this; however, when I download the dataset in COCO format, I find that the annotations/bounding boxes are displaced. This significantly hurts precision, recall, and F1 score when deploying a model. I have attached a screenshot of the image as rendered by my Python script.

The first image below is rendered with Python; you can see the significant offset between the bounding boxes and the objects they are supposed to cover. The second image is from Roboflow.

The Python code used:

```python
import json

import cv2

# Set the image path
image_path = 'xxxxxx/downloads/Od123/valid/0000255_jpg.rf.63fd9d6b07e3e30f344aa710bd322ac1.jpg'

# Load ground truth annotations (COCO format JSON)
with open('/xxxxxxxxx/downloads/Od123/valid/_annotations.coco.json', 'r') as f:
    coco_gt = json.load(f)

# Extract the image filename without extension
image_filename = image_path.split('/')[-1].split('.')[0]

# Map image ids to image records; COCO image ids are not guaranteed to match
# list indices, so indexing coco_gt['images'] by id directly is fragile
images_by_id = {img['id']: img for img in coco_gt['images']}

# Find annotations for the specific image filename
gt_annotations = [
    ann for ann in coco_gt['annotations']
    if images_by_id[ann['image_id']]['file_name'].split('.')[0] == image_filename
]

# Read the image; keep it in BGR, since cv2.imshow expects BGR
# (converting to RGB before imshow swaps the red and blue channels)
frame = cv2.imread(image_path)

# Draw the ground truth bounding boxes; COCO bbox format is [x, y, width, height]
for gt_annotation in gt_annotations:
    x, y, w, h = map(int, gt_annotation['bbox'])
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # green for ground truth

# Display the image with only ground truth bounding boxes
cv2.imshow('Ground Truth Bounding Boxes', frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

Has anyone experienced a similar problem? Would really appreciate any help.

Thank you!

I have the same problem with every dataset I download, including one I created myself.
The annotations appear correctly placed in Roboflow's viewer, but when I download the dataset (in any format!) all annotations have an X and Y offset of roughly 4% from their original placement.
This is annoying; I'm being forced to build my own annotator.

@Ben_Kovach @ihsan_Seker
Okay, there’s a good chance I’m misunderstanding the problem, but I see two things here.

1: If the issue in the first image were many false positives, that would mean the confidence threshold is set too low, so the model shows everything it thinks could be a class even when it truly isn't; this is why confidence is a percentage value. In the second image, a few objects weren't detected, which means the confidence threshold is set too high, the model isn't trained well enough to detect harder cases such as small groups of people, or a combination of both.
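For concreteness, here is a minimal sketch of how a confidence threshold trades false positives against missed detections; the detection structure is hypothetical, not taken from Ihsan's code:

```python
# Minimal sketch: filtering raw detections by a confidence threshold.
# The (class_name, confidence, bbox) structure here is hypothetical.
detections = [
    ("person", 0.92, (34, 50, 120, 200)),
    ("person", 0.31, (400, 60, 90, 180)),  # low confidence, likely a false positive
    ("car", 0.55, (210, 300, 150, 80)),
]

CONF_THRESHOLD = 0.5  # raise to cut false positives, lower to recover missed objects

kept = [det for det in detections if det[1] >= CONF_THRESHOLD]
print(kept)  # only detections at or above the threshold remain
```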

2: You mentioned that the bounding boxes are displaced slightly. I can't say I see any displacement, but the two images you sent are at different resolutions; if the boxes are shifted, maybe resolution scaling moved them a little?

Am I understanding the problem correctly?

To clarify, the problem I described has nothing to do with training or with model results/confidence.
I'm referring to a problem with the dataset itself, or with Roboflow's Dataset Export feature.

  1. Choose a dataset with annotations and confirm the positions are correct in Roboflow's viewer.
  2. Export the dataset to a zip file (I tested the YOLOv8 and CreateML formats).
  3. Using a script I wrote, draw a rectangle on each image for every annotation in the exported data file (see the sketch after this list). I have verified and tested the validity of my script with other (non-Roboflow) datasets; it works perfectly with YOLOv8 and CreateML datasets exported or created using other tools.
  4. The resulting bounding boxes are all misplaced by some deviation: the origin (x, y) is off by 4% to 6%. The offset is not constant and seems to be affected by the original image size.
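For reference, here is a minimal sketch of the kind of verification script described in step 3, for the YOLOv8 label format (one normalized "class cx cy w h" line per box in a .txt file per image); the paths and file names are hypothetical:

```python
# A minimal sketch of the verification script described in step 3, for the
# YOLOv8 (YOLO txt) label format. Paths and file names are hypothetical.
import cv2

image_path = "dataset/images/example.jpg"
label_path = "dataset/labels/example.txt"  # "class cx cy w h" lines, normalized to 0..1

frame = cv2.imread(image_path)
img_h, img_w = frame.shape[:2]

with open(label_path) as f:
    for line in f:
        _cls, cx, cy, w, h = line.split()
        # Convert the normalized center/size values to pixel corner coordinates
        cx, cy = float(cx) * img_w, float(cy) * img_h
        w, h = float(w) * img_w, float(h) * img_h
        x1, y1 = int(cx - w / 2), int(cy - h / 2)
        x2, y2 = int(cx + w / 2), int(cy + h / 2)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imshow("YOLO label check", frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
```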

I first thought this had something to do with the preprocessing steps (Image Resize or Auto-Orient), so I generated a version of my own dataset without those steps. But after exporting that dataset, I see the same problem, even though the exported images are at their original sizes and no Auto-Orient step was performed.
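One way to check whether EXIF orientation could still be involved (my own guess, not a confirmed cause) is to inspect the orientation tag of the exported images, e.g. with Pillow:

```python
# A sketch for checking whether exported images carry an EXIF orientation tag.
# A value other than 1 means the raw pixels are stored rotated/flipped relative
# to how EXIF-aware viewers display them, which can make boxes appear shifted.
# The glob path is hypothetical.
import glob

from PIL import Image

ORIENTATION_TAG = 274  # standard EXIF tag id for "Orientation"

for path in glob.glob("dataset/valid/*.jpg"):
    with Image.open(path) as img:
        orientation = img.getexif().get(ORIENTATION_TAG, 1)
        if orientation != 1:
            print(f"{path}: EXIF orientation = {orientation}")
```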

Thanks @Ben_Kovach and @ihsan_Seker for raising this concern. Do you mind sharing image examples that show the error more clearly? If I understand correctly, the difference between the two images shared by Ihsan is caused by model accuracy (rather than by the export error being discussed).

I tried to replicate the steps noted above (export in COCO JSON and compare the Roboflow UI annotations against the exported annotations). However, if there is a difference, I can't see it.

Here is the original image inside Roboflow:

Here is the result of exporting the image and running Ihsan's script (not sure why the colors are being read incorrectly; possibly an RGB/BGR channel swap before cv2.imshow):

Jumping in here: the bounding box displacement may be due to Auto-Orient being off when generating your dataset. You can read more about what that step does on our blog post.

To solve this, generate a new dataset version in your project and turn on the Auto-Orient preprocessing step. This should fix the annotation displacement when exporting the dataset.
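As a local workaround, you can also bake the EXIF orientation into the pixel data yourself before drawing, using Pillow's ImageOps.exif_transpose (a sketch, with a hypothetical path):

```python
# A sketch of the workaround: bake the EXIF orientation into the pixel data so
# the image on disk matches what EXIF-aware viewers display. Path is hypothetical.
from PIL import Image, ImageOps

img = Image.open("dataset/valid/example.jpg")
upright = ImageOps.exif_transpose(img)  # rotates/flips pixels according to the EXIF tag
upright.save("dataset/valid/example_upright.jpg")
```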