Incorrectly Exporting Annotations

Hi there, I am running into an issue with a dataset I exported for training. I got the dataset from a friend who used it successfully to train a model via YOLOv8, but I want to use it to train a model via YOLOv6-N.

I uploaded the dataset to Roboflow and began annotating some images that were missing annotations. I then exported the dataset and ran it through a Python script (“pyscriptA”) that checks each label line and, if it contains more than 5 attributes, trims off the excess and keeps the first 5 (class_id center_x center_y width height).

Example of pyscriptA’s effect on a label line:
Before:

0 0.65625 0.4375 0.38125 0.4375 0.38125 0.6515624999999999 0.65625 0.6515624999999999 0.65625 0.4375

After:

0 0.65625 0.4375 0.38125 0.4375
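
For reference, the core of pyscriptA is essentially this (a simplified sketch; the labels path is a placeholder):

import os

labels_dir = "train/labels"  # placeholder; point this at the exported label files

for name in os.listdir(labels_dir):
    if not name.endswith(".txt"):
        continue
    path = os.path.join(labels_dir, name)
    with open(path) as f:
        lines = [line.split() for line in f if line.strip()]
    # keep only the first 5 attributes of each label line
    trimmed = [" ".join(parts[:5]) for parts in lines]
    with open(path, "w") as f:
        f.write("\n".join(trimmed) + "\n")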

I then began training the model and noticed that at epoch 30 of 300 there were still 0 detections within the batch. This concerned me, so I investigated and wrote a Python script (“pyscriptB”) that uses OpenCV and matplotlib to draw the labeled bounding boxes on random images from my train dataset.

Excerpt of pyscriptB:

import cv2
import matplotlib.pyplot as plt

# image_path and label_path are picked at random from the train set earlier in the script
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; matplotlib expects RGB
height, width, _ = image.shape  # obtain image dimensions

with open(label_path, "r") as f:
    for line in f:
        parts = line.strip().split()
        if len(parts) < 5:
            continue

        # math used to draw the rectangle: convert normalized YOLO
        # (center_x, center_y, width, height) to pixel corner coordinates
        class_id, x_center, y_center, w, h = map(float, parts[:5])
        x1 = int((x_center - w / 2) * width)
        y1 = int((y_center - h / 2) * height)
        x2 = int((x_center + w / 2) * width)
        y2 = int((y_center + h / 2) * height)

        # draw the rectangle and class label
        cv2.rectangle(image, (x1, y1), (x2, y2), (255, 0, 0), 2)
        cv2.putText(image, f"Class {int(class_id)}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)

# display the annotated image with matplotlib
plt.imshow(image)
plt.axis("off")
plt.show()

This is the output I get for a random image. As you can see, the rectangle is way off.

However, this is how Roboflow’s Annotation Tool displays the same image/label.

I set the images to resize to 640x352 for my application of this model, but I also tried resizing to 640x640 and exporting without any resizing, and the rectangles were still drawn incorrectly.

I think it’s fair to mention that some images/labels within the train dataset do draw their rectangles correctly in pyscriptB. The pattern I noticed is that the incorrectly drawn rectangles come from labels that already existed when I got the dataset from my friend (typically more than 5 attributes), while the correctly drawn ones are the labels I annotated myself in Roboflow (exactly 5 attributes).

That shouldn’t matter, though: if Roboflow draws the rectangles correctly from those existing labels, why doesn’t pyscriptB?

  • Project Type: Object detection
  • Operating System & Browser: Windows 10 & Google Chrome
  • Project Universe Link or Workspace/Project ID: dronev6-7np3p

Hi there, it looks like the original annotations are polygon (segmentation) labels, formatted as class_id point_0_x point_0_y point_1_x point_1_y … Roboflow detects that format and converts it internally to the standard YOLO object-detection format, but we don’t modify your original annotations on export. So when pyscriptA keeps only the first 5 values, you end up with class_id point_0_x point_0_y point_1_x point_1_y, which pyscriptB then misreads as class_id center_x center_y width height.

You can probably fix this by updating your pyscriptA to instead treat those points as the corners of the box, and compute the center and width/height from them!
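
Roughly like this (a quick sketch, assuming each line is class_id followed by normalized x,y pairs; a closing point that repeats the first is fine, since only the min/max matter):

def polygon_to_yolo_bbox(parts):
    # parts: ["class_id", "x0", "y0", "x1", "y1", ...] (normalized coordinates)
    xs = [float(v) for v in parts[1::2]]  # odd indices are x coordinates
    ys = [float(v) for v in parts[2::2]]  # even indices (from 2) are y coordinates
    w = max(xs) - min(xs)
    h = max(ys) - min(ys)
    x_center = min(xs) + w / 2
    y_center = min(ys) + h / 2
    return f"{parts[0]} {x_center} {y_center} {w} {h}"

For your example line above, that gives roughly 0 0.51875 0.54453 0.275 0.21406.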

Hi Lake, solved! I made a Python script that runs through every label txt file, treats each line as class_id point_0_x point_0_y point_1_x point_1_y point_2_x point_2_y point_3_x point_3_y instead of class_id center_x center_y width height, and converts it accordingly. That worked perfectly.
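
In case it helps anyone else, the script looks roughly like this (same min/max conversion as above; the labels path is a placeholder):

import os

labels_dir = "train/labels"  # placeholder; point this at your label files

for name in os.listdir(labels_dir):
    if not name.endswith(".txt"):
        continue
    path = os.path.join(labels_dir, name)
    fixed = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) <= 5:
                fixed.append(line.strip())  # already class_id center_x center_y width height
                continue
            # treat the values as polygon points and convert to a bounding box
            xs = [float(v) for v in parts[1::2]]
            ys = [float(v) for v in parts[2::2]]
            w, h = max(xs) - min(xs), max(ys) - min(ys)
            xc, yc = min(xs) + w / 2, min(ys) + h / 2
            fixed.append(f"{parts[0]} {xc} {yc} {w} {h}")
    with open(path, "w") as f:
        f.write("\n".join(fixed) + "\n")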

Thanks!

Really glad to help!
