- Hello, I’ve encountered a problem. I’m a deep learning beginner using Roboflow for data annotation, labeling cars, motorcycles, traffic lights, and zebra crossings. The annotations look correct in Roboflow, and a model trained with Roboflow’s built-in training performs reasonably well. The dataset contains 2,000 images in total.

However, when I train on the same data with YOLOv8, something unexpected happens: the annotations appear scattered all over the place. I have double-checked my annotations, and they are properly normalized. Datasets from other Roboflow Universe projects train fine, but with my own project the annotation positions are offset, and I’m not sure where the problem lies. The original images were all 1920x1080, and I resized them to 960x960 on Roboflow. After exporting the dataset in YOLOv8 format and downloading it, I re-uploaded it to Roboflow to verify, and the annotation positions were correct there. Yet during training with YOLOv8, the offset appears.
[Image: val_batch_labels (YOLOv8 validation batch with ground-truth boxes plotted)]
You can see that some images have correct annotation positions, while others have offset annotations.
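One way to narrow down whether the offset lives in the exported label files themselves or in the YOLOv8 loading step is to draw the raw YOLO labels straight onto the exported images, independent of Ultralytics. Here is a minimal sketch, assuming the standard Roboflow YOLOv8 export layout of `train/images` and `train/labels` (adjust the paths to your download):

```python
import cv2
from pathlib import Path

# Assumed folder layout of a Roboflow YOLOv8 export; adjust as needed.
IMAGES = Path("train/images")
LABELS = Path("train/labels")
OUT = Path("label_check")
OUT.mkdir(exist_ok=True)

for img_path in sorted(IMAGES.glob("*.jpg")):
    img = cv2.imread(str(img_path))
    if img is None:
        continue
    h, w = img.shape[:2]
    label_path = LABELS / (img_path.stem + ".txt")
    if not label_path.exists():
        continue
    for line in label_path.read_text().splitlines():
        # YOLO format: class x_center y_center width height, normalized to 0-1.
        cls, xc, yc, bw, bh = line.split()[:5]
        xc, yc = float(xc) * w, float(yc) * h
        bw, bh = float(bw) * w, float(bh) * h
        x1, y1 = int(xc - bw / 2), int(yc - bh / 2)
        x2, y2 = int(xc + bw / 2), int(yc + bh / 2)
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(img, cls, (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imwrite(str(OUT / img_path.name), img)
```

If the boxes are already offset in these renders, the problem is in the exported label files; if they look correct here but wrong in YOLOv8’s `val_batch*` plots, the issue is somewhere in the YOLOv8 loading step.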
Afterward, I ran isolated tests on my dataset. I re-annotated images from the previous dataset, about 35 samples, then used data augmentation to expand the set to 160 samples. After training for 100 epochs, the annotation positions were correct. Later, I expanded the dataset to 140 annotated images and used augmentation to reach 430 images. At that point the behavior reverted to the initial problem: the annotation positions shifted and appeared misplaced, which is perplexing.
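For reference, the training call is the standard Ultralytics one. A minimal sketch, where `yolov8n.pt` and the `data.yaml` path are placeholders for whatever your export uses:

```python
from ultralytics import YOLO

# Placeholder weights/data paths; 100 epochs and 960 px match the setup above.
model = YOLO("yolov8n.pt")
model.train(data="data.yaml", epochs=100, imgsz=960)
```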
- Project Type: Object detection
- Operating System & Browser: Google Colab