The annotations seem to be scattered all over the place (YOLOv8)

Hello, I’ve encountered a problem. I am a deep learning beginner, and I am using Roboflow for data annotation. I am labeling cars, motorcycles, traffic lights, and zebra crossings. The annotations look fine when I view them on Roboflow, and the model trained with Roboflow’s built-in training performs reasonably well. I have 2000 images in the dataset in total. However, when I train on the same data with YOLOv8, something unexpected happens: the annotations seem to be scattered all over the place. I have double-checked my annotations, and the coordinates are normalized. I tried other projects from Roboflow Universe and they worked fine, but with my own project the annotation positions are offset, and I’m not sure where the problem lies. The original images were all 1920x1080, and I resized them to 960x960 on Roboflow. After exporting the dataset in YOLOv8 format and downloading it, I re-uploaded it to Roboflow to check, and the annotation positions were correct there. But during training with YOLOv8, the positions are wrong.

[Image: val_batch_label]


You can see that some images have correct annotation positions, while others have offset annotations.

Afterward, I ran some individual tests on my dataset. I re-annotated about 35 images from the previous dataset and applied data augmentation to expand it to 160 samples. After training for 100 epochs, the annotation positions were correct. Later, I expanded the set to 140 annotated images and used data augmentation to reach 430 images. However, the situation became the same as before: the annotation positions shifted and appeared misplaced, which is perplexing.
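For anyone hitting this, a quick way to rule out the export itself is to draw the label files back onto the exported images outside of any training pipeline. Below is a minimal sketch, assuming a standard YOLO-format layout with `train/images` and `train/labels` folders and either box or polygon labels; the paths and JPEG extension are placeholders to adjust for your download.

```python
# Minimal sanity check: overlay YOLO-format labels on the exported images.
# Paths and the file extension are assumptions; adjust to your dataset layout.
from pathlib import Path

import cv2
import numpy as np

image_dir = Path("dataset/train/images")
label_dir = Path("dataset/train/labels")
out_dir = Path("label_check")
out_dir.mkdir(exist_ok=True)

for img_path in sorted(image_dir.glob("*.jpg"))[:20]:  # spot-check a handful
    img = cv2.imread(str(img_path))
    h, w = img.shape[:2]
    label_path = label_dir / (img_path.stem + ".txt")
    if not label_path.exists():
        continue
    for line in label_path.read_text().splitlines():
        parts = line.split()
        cls = parts[0]
        vals = list(map(float, parts[1:]))
        if len(vals) == 4:
            # Box label: normalized center x/y, width, height.
            xc, yc, bw, bh = vals
            x1, y1 = int((xc - bw / 2) * w), int((yc - bh / 2) * h)
            x2, y2 = int((xc + bw / 2) * w), int((yc + bh / 2) * h)
            cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(img, cls, (x1, max(y1 - 5, 10)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        else:
            # Segmentation label: normalized x,y polygon points.
            pts = (np.array(vals).reshape(-1, 2) * [w, h]).astype(np.int32)
            cv2.polylines(img, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
    cv2.imwrite(str(out_dir / img_path.name), img)
```

If the drawn boxes land in the wrong place here too, the problem is in the exported files themselves; if they look correct, the issue is on the training side (for example a stale cache or an orientation mismatch).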

Hi @廷_奕

Sorry to hear you’re having issues. I took a look at your project on Universe, and it seems you don’t have the auto-orient preprocessing step enabled. Could you try generating a version with auto-orient as a preprocessing step?
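For context, auto-orient matters because an image with an EXIF orientation tag can be displayed rotated by one tool and unrotated by another, which makes normalized label coordinates appear shifted. A minimal sketch to check whether any of your exported images carry such a tag, assuming a local copy of the images at a placeholder path:

```python
# List images that carry a non-default EXIF orientation tag; these are the
# ones where viewers and training code can disagree about the pixel layout.
# The directory path is an assumption; point it at your exported images.
from pathlib import Path

from PIL import Image

image_dir = Path("dataset/train/images")

for img_path in image_dir.glob("*.jpg"):
    with Image.open(img_path) as im:
        orientation = im.getexif().get(0x0112)  # 0x0112 = EXIF Orientation
        if orientation and orientation != 1:
            print(f"{img_path.name}: EXIF orientation tag = {orientation}")
```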

@leo Of course. I generated v13, which includes auto-orient. However, when I fed this version into YOLOv8 again, there was still no improvement.

Hi @廷_奕

Could you also turn on resizing? I’ve been looking into your issue, and I’ve heard that may be a temporary fix.

@leo No problem. v16 has both ‘auto-orient’ and ‘resize’ enabled, but the result is still the same…

Hi @廷_奕

I’ll need to look into your issue a little deeper. I’ll let you know when I have an update.

Hi @廷_奕

I’m still working on your issue. Could you test using the YOLOv5 PyTorch export option to train your model and see if the errors still persist?
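For reference, one rough sketch of running that comparison is to pull the YOLOv5 PyTorch export with the roboflow package and train on its data.yaml with the Ultralytics API. The API key, workspace, and project names below are placeholders, and the training settings are only examples.

```python
# Download the YOLOv5 PyTorch export and train on it. Workspace/project names,
# the API key, and the hyperparameters are placeholders for your own values.
from roboflow import Roboflow
from ultralytics import YOLO

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")
dataset = project.version(16).download("yolov5")  # YOLOv5 PyTorch export

model = YOLO("yolov8n.pt")
model.train(data=f"{dataset.location}/data.yaml", epochs=100, imgsz=960)
```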

@廷_奕 Hello, I’m facing the same issue with my annotations: they are all over the place. The correct number of labels is detected in each image, but the label placements are not right. I have cleared the label cache multiple times, and the label polygon coordinates are in the [0, 1] range with up to 4 decimal places, but the issue is still the same.
Were you able to resolve this issue?
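In case it helps with debugging, here is a small sketch (dataset path assumed) that flags any label values outside [0, 1] and removes stale *.cache files so the labels are re-read on the next training run:

```python
# Flag label coordinates outside [0, 1] and delete stale Ultralytics cache
# files so labels are re-parsed on the next run. The dataset root is assumed.
from pathlib import Path

dataset_root = Path("dataset")

for lbl in dataset_root.rglob("labels/*.txt"):
    for line_no, line in enumerate(lbl.read_text().splitlines(), start=1):
        coords = [float(v) for v in line.split()[1:]]
        bad = [c for c in coords if not 0.0 <= c <= 1.0]
        if bad:
            print(f"{lbl}:{line_no} out-of-range values: {bad}")

for cache in dataset_root.rglob("*.cache"):
    cache.unlink()
    print(f"removed {cache}")
```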