Misaligned bounding boxes after augmentation

@Mohamed I seem to be having a similar issue with an object detection dataset. It was fully labelled in Roboflow and the labels look fine there. But when I export the images and labels to do some training with YOLOv7, the bounding boxes no longer match what was on Roboflow. Auto-Orient doesn’t seem to fix the issue for me. Appreciate your support resolving this. Thanks!

Auto-Orient: Applied

Outputs per training example: 3
Flip: Horizontal
Crop: 0% Minimum Zoom, 25% Maximum Zoom
Rotation: Between -10° and +10°
Shear: ±10° Horizontal, ±10° Vertical
Hue: Between -15° and +15°
Saturation: Between -15% and +15%
Brightness: Between -25% and +25%
Blur: Up to 2px
Noise: Up to 1% of pixels


I am still having this issue for an object detection dataset using augmentation. Any tips? Thanks


Has this been solved? I’ve noticed my object detection dataset + augmentation produces different results outside of roboflow (boxes are not correct).

Has this been resolved? I am having the same issue. I tried adding the Auto Orient pre-processing step but same issue.

@Gabriel_Aversano

I’ve merged all your replies since they were regarding the same issue. Did you try generating a version without auto orient?

Hi, yes, that is how I did it originally. Then, based on suggestions from the forum, I tried with Auto-Orient, but the issue persists.

Can you share the ID of your workspace and project, or a link to the project on Universe?

@stellasphere
BERKELEY LAB, SORMA v4 and v5

Hi @Gabriel_Aversano

Apologies for the late response. As a troubleshooting step: can you try applying Auto-Orient together with a resizing option and see if you still experience the issues you are referring to?


Hey @leo this continues to be a problem.

I’ve tried adding the resizing option for v6.

Preprocessing

Auto-Orient: Applied
Resize: Stretch to 640x640

Augmentations

Outputs per training example: 3
Flip: Horizontal
Crop: 0% Minimum Zoom, 25% Maximum Zoom
Rotation: Between -10° and +10°
Shear: ±10° Horizontal, ±10° Vertical
Hue: Between -15° and +15°
Saturation: Between -15% and +15%
Brightness: Between -25% and +25%
Blur: Up to 2px
Noise: Up to 1% of pixels

Bounding boxes look fine on Roboflow, but when I export them there are issues on some of the images. Here are a couple of examples, including the augmentations that were applied.

image_2022-09-12T145316-417781-0700.png
Auto-Orient: Applied
Resize: Stretch to 640x640

image_2022-09-12T145441-399249-0700.png
Auto-Orient: Applied
Crop: Keep 87%, Centered on 54%, 58%
Resize: Stretch to 640x640
Flip: Horizontal
Rotation: 3°
Shear: X -5°, Y 4°
Hue: -13°
Saturation: 1%
Brightness: -7%
Blur: 1px
Noise: 0.25%

Any other ideas?

Hi @Gabriel_Aversano

Sorry to hear you’re still having issues. Could you share a screenshot of what you mean when you say the bounding boxes no longer match what is on Roboflow? Where are you seeing these mismatched boxes? Is it possible you exported in a format that is not compatible with what you are using it for?

Additionally, if you are using the datasets with a particular notebook from us, please link it so we can help troubleshoot better.

Hey @leo

When I view the labeled images on Roboflow they look fine (i.e., the bounding boxes are in the correct location). I exported the labels in yolov7 format and am viewing them on my machine with custom code that reads the label text files and draws the bounding boxes on the images.

The labels for the test dataset are fine but some of the labels for the train dataset are incorrect (i.e., bounding boxes are not in the correct location). This problem only started happening when I added data augmentation steps. The other datasets in my project (like v2) are fine.

It is possible that there is a bug in my own code. However, given that the issue only happens when I add the data augmentation steps, it is possible there is something going on with the export from Roboflow. Moreover, I first realized this problem when using the labeled dataset (with data augmentation) for training a custom yolov7 model using the official yolov7 code. The code automatically spits out some of the training images with the bounding boxes. The incorrect locations were observed here as well leading me to believe something went wrong in the export.

Does Roboflow do anything under the hood to display the bounding boxes that might not be applied when exporting? I have tried with and without auto-orient and resizing.

Here are a couple of examples of two images from the training dataset using my own code. In one example, the labels are fine; in the other, they are not. You can see that there aren’t any reflections being applied to these images.
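In case it helps anyone debugging the same symptom: a quick way to rule out a bad export is to scan the label files themselves. All values in a YOLO-format label should be normalized to [0, 1], so out-of-range or malformed lines point at an export problem rather than a drawing bug. Here is a minimal stdlib-only sketch of such a check (function names are my own, not from any library):

```python
# Hypothetical sanity check for exported YOLO-format label files.
# Flags malformed lines or normalized values outside [0, 1], either of
# which would explain boxes being drawn in the wrong place.

def check_label_line(line):
    """Return a list of problems found in one YOLO label line."""
    problems = []
    parts = line.split()
    if len(parts) != 5:
        problems.append(f"expected 5 fields, got {len(parts)}")
        return problems
    cls, *coords = parts
    if not cls.isdigit():
        problems.append(f"class id {cls!r} is not an integer")
    for name, value in zip(("cx", "cy", "w", "h"), coords):
        v = float(value)
        if not 0.0 <= v <= 1.0:
            problems.append(f"{name}={v} is outside [0, 1]")
    return problems

def check_label_file(path):
    """Return {line_number: problems} for every bad line in a label file."""
    bad = {}
    with open(path) as f:
        for i, line in enumerate(f, start=1):
            if line.strip():
                problems = check_label_line(line)
                if problems:
                    bad[i] = problems
    return bad
```

If this reports no problems but the boxes still look wrong when drawn, the issue is more likely in how the viewer maps normalized coordinates back to pixels (or in the image orientation), not in the label values themselves.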

Hi @Gabriel_Aversano

It’s interesting that you say this issue only occurs when data augmentation is applied. Is the issue consistent whenever there are any augmentation steps?

Just to double check, YOLOv7’s annotation format uses the following format:
class_id center_x center_y width height

With the measurements being normalized from 0-1. (x: x/image width, y: y/image height, width: width/image width, height: height/image height)
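To make the normalization above concrete, here is a small worked example (the function name is illustrative, not part of YOLOv7) converting one label line back to pixel corner coordinates, which is what a custom viewer would need to draw the box:

```python
# Convert one YOLO-format label line ("class_id cx cy w h", all
# coordinates normalized to 0-1) into pixel corner coordinates.

def yolo_to_pixel_box(label_line, img_w, img_h):
    """Return (class_id, (x_min, y_min, x_max, y_max)) in pixels."""
    cls, cx, cy, w, h = label_line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    x_min = (cx - w / 2) * img_w   # center minus half-width
    y_min = (cy - h / 2) * img_h
    x_max = (cx + w / 2) * img_w
    y_max = (cy + h / 2) * img_h
    return int(cls), (round(x_min), round(y_min), round(x_max), round(y_max))

# A centered box covering half the image in each dimension on a
# 640x640 image:
# yolo_to_pixel_box("0 0.5 0.5 0.5 0.5", 640, 640)
# -> (0, (160, 160, 480, 480))
```

A common source of mismatched boxes is a viewer that treats `cx, cy` as the top-left corner instead of the center, or that forgets to divide the width and height by two.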

If the issue persists, would it be okay to duplicate your project to troubleshoot this?

Hey @leo,

I haven’t gone through the process of applying one augmentation step at a time to figure out which one is causing issues, but I can confirm that the dataset without augmentation (version 2) works fine.

Yes, that is the correct format for YOLOv7.

Yeah it would be great if you could duplicate the project, export the labels on the training dataset, and let me know if the bounding boxes match or not.

Thank you very much for your support!

Hi @Gabriel_Aversano

I was able to download an augmented dataset from your project and did not see any issues with the annotated images.

Since it’s a private project, I won’t share the image here, but I ran it through a YOLOv7 training notebook and it produced properly labeled images.

Hey @leo thanks for checking it out. I redownloaded the dataset and it seems to be working on my end too. Thank you very much!
