Annotations get mismatched when exported

We got some annotations labelled using your platform, and in the UI the annotations look right. However, once I export the dataset in YOLOv8 format, start training, and visualize the labels, they are quite far off. I did not use any augmentation while creating the dataset on Roboflow, and the annotations are normalized, so I don't understand why this is happening.
Can I know what’s going wrong?

Also, some of the masks are patchy; not sure why that's happening. There are white spaces here and there.

Project type: Segmentation

I’m sorry to hear that you’re experiencing issues with your annotations. Based on the information provided, here are a few things to check:

  1. Verify your annotations: Ensure that none of the annotations are out of frame in the images. This is a crucial step in preparing data to train YOLOv3, and it applies to YOLOv8 as well (see the quick check after this list).

  2. Check the EXIF orientation of your images: Your images might be stored on disk in a different orientation than the one applications display. This can shift the apparent position of your annotations.

  3. Ensure images are resized correctly and annotations are updated to match: If your images have been resized, the annotations must be scaled to the new image sizes.

  4. Check the formatting of your annotations: The annotations should match the requirements of your model’s inputs, for instance, TFRecords for TensorFlow or flat text files for some implementations of YOLO.
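
As a quick sanity check for points 1 and 2, a short script like the sketch below can flag out-of-frame coordinates and non-default EXIF orientation. The paths are placeholders, and it assumes the YOLO text label format, where every value after the class id is normalized to [0, 1].

from PIL import Image

image_path = "train/images/example.jpg"  # placeholder path
label_path = "train/labels/example.txt"  # placeholder path

# EXIF orientation: any value other than 1 means viewers rotate the image
# on display while the stored pixels (and your labels) stay unrotated.
orientation = Image.open(image_path).getexif().get(274, 1)  # 274 = Orientation tag
print("EXIF orientation:", orientation)

# Normalized coordinates: every value after the class id should sit in [0, 1].
with open(label_path) as f:
    for line_no, line in enumerate(f, start=1):
        _cls, *coords = line.split()
        bad = [c for c in coords if not 0.0 <= float(c) <= 1.0]
        if bad:
            print(f"line {line_no}: out-of-frame values {bad}")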

If you’ve checked all of these and are still experiencing issues, it might be helpful to reach out to Roboflow’s support for further assistance.


Hi! I’m Roboflow’s AI-powered bot. This is an automated response generated using AI with knowledge from our docs, blog, knowledge base, and previous forum responses. If you find this to be a helpful answer, please mark it as the solution using the checkbox icon below. If this doesn’t solve your issue, please follow up with what you’ve tried/why, and the community will continue to chime in as usual.

I have checked all of the above, and I am still experiencing this issue.


Hi Karthik, would you mind granting help@roboflow.com access to your workspace? I can take a deeper look.

@Karthik_Datta Glad we were able to get this issue resolved in email. Welcome to the Starter Plan!

✅ Resolved. For visibility:

When data annotated in Roboflow contains a mix of polygons and bounding boxes, Roboflow exports the annotation data as-is: some annotations have four coordinates (a box) and others have many points (a polygon). That is why, when you export the data and visualize it directly, the annotations appear exactly where you expect them in your images.

The framework you were training with, however, mishandles mixed (bbox/polygon) annotations and modifies the labels, which produces the shifted results your original post visualizes.
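
To see whether your exported label files actually mix the two shapes, a quick check like this sketch can help. It assumes the YOLO text format, where a box line has 5 whitespace-separated tokens and a polygon line has 7 or more; the directory path is a placeholder.

from pathlib import Path

labels_dir = Path("train/labels")  # placeholder path
for label_file in labels_dir.glob("*.txt"):
    # classify each non-empty line by its token count
    kinds = {"box" if len(line.split()) == 5 else "polygon"
             for line in label_file.read_text().splitlines() if line.strip()}
    if len(kinds) > 1:
        print(f"{label_file.name} mixes boxes and polygons")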

There are a few solutions:
1. Hosted training. Training on hosted GPUs in-app respects the mixed annotations and does not modify your labels. (In the example we solved here, the mAP went from ~2% to 40% on the seven-image dataset because the labels were passed to the model as you intended.)
2. Train with a framework and format that do not modify annotations. If you’re training on your own GPU/Colab, you can use native PyTorch, which accepts COCO JSON with mixed bounding boxes and polygons without modifying your annotations (see the sketch after this list).
3. Export the data and force masks. If you’re training on your own GPU/Colab with a library that modifies mixed annotations, you can force the library to treat all of the annotations as masks, or all of them as bounding boxes.
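
For option 2, here is a minimal sketch, assuming a Roboflow "coco" export; the folder and file names below are placeholders you would adapt to your download (torchvision's CocoDetection also requires pycocotools):

from torchvision.datasets import CocoDetection

# Load a COCO JSON export; boxes and polygon segmentations are kept as-is.
dataset = CocoDetection(
    root="train",                           # image folder (placeholder)
    annFile="train/_annotations.coco.json"  # annotation file (placeholder)
)

image, targets = dataset[0]
for t in targets:
    # each target dict keeps "bbox", and polygon annotations keep "segmentation"
    print(t["category_id"], t["bbox"], "segmentation" in t)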

For option 3, using the Roboflow Ultralytics YOLOv8 instance segmentation notebook example, you can load your data and use supervision to force the ultralytics library to treat all of the data as masks.

Load data

!pip install roboflow supervision --quiet
from roboflow import Roboflow
rf = Roboflow(api_key="") # Replace with your key

Force mixed bboxes and polygons as masks

import supervision as sv

# grab our data
project = rf.workspace("WORKSPACE-NAME").project("PROJECT-NAME") # Replace with your workspace and project names
dataset = project.version(NUMBER).download("yolov8") # Replace with your version number

# for each split, load the YOLO annotations with every shape forced to a mask,
# then write the converted labels back in place
for subset in ["train", "test", "valid"]:
    ds = sv.DetectionDataset.from_yolo(
        images_directory_path=f"{dataset.location}/{subset}/images",
        annotations_directory_path=f"{dataset.location}/{subset}/labels",
        data_yaml_path=f"{dataset.location}/data.yaml",
        force_masks=True
    )
    ds.as_yolo(annotations_directory_path=f"{dataset.location}/{subset}/labels")
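
After this pass, every label file in the export should contain polygon-style lines only, so the ultralytics trainer parses all of the annotations uniformly as masks instead of guessing per line.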

Hope this helps others, too!

