I have labeled 1000 images in Roboflow, and when I look at them in Roboflow everything is okay, but when I train the model I get some weird detections. I started investigating why the model constantly detects these certain scenes completely wrong.
I have now traced the problem to the ground-truth / label txt files. Some of them have extra coordinates for objects, and I suspect these make the trained model detect weirdly.
Example of a broken label txt file (0 is the class). There should be 3 objects and the coordinates for these.
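For comparison, a well-formed YOLO detection label has exactly five values per object: the class index followed by a normalized bounding box. The snippet below is only an illustration with made-up values (not from my actual file); the broken files contain lines with more values per object, similar to the last line here:

```
# expected format: class x_center y_center width height (all normalized to 0-1)
0 0.512 0.430 0.210 0.180
0 0.250 0.700 0.100 0.150
0 0.800 0.300 0.050 0.090

# a line with extra coordinates (e.g. a polygon written where a box was expected)
0 0.512 0.430 0.210 0.180 0.300 0.250 0.410 0.600
```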
Based on the information provided, it seems like you might be encountering a few issues that have been discussed in the Roboflow community.
Mislabelled Images: As discussed in the Roboflow blog, mislabelled images can cause issues with your model’s performance. Roboflow provides tools to identify and correct these issues. You can inspect each annotation in Roboflow and correct it if you find a mislabelled image in your dataset.
Corrupt Image/Label: As seen in the Roboflow forum post, there can be issues with non-normalized or out-of-bounds coordinates. This can cause warnings and may affect the performance of your model (see the sketch after this list for a quick way to scan your label files for such lines).
Improper Annotations: As mentioned in the Roboflow blog, annotations can be prone to error. They can be illogical (e.g. a negative width), completely out of frame, or partially out of frame. Roboflow performs automated bounding box quality checks and can drop problematic annotations to automatically fix the issue.
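If you want to confirm which label files are affected before re-exporting, a quick scan of the .txt files can flag lines that do not match the expected five-field layout or that contain out-of-range values. This is a minimal sketch, assuming YOLO-style labels with normalized coordinates; LABEL_DIR is a placeholder to point at your own export:

```python
from pathlib import Path

# Placeholder path: point this at the folder containing your exported YOLO label .txt files.
LABEL_DIR = Path("dataset/labels")

def check_label_file(path: Path) -> list[str]:
    """Return human-readable problems found in one YOLO label file."""
    problems = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        fields = line.split()
        if not fields:
            continue  # ignore blank lines
        if len(fields) != 5:
            problems.append(f"line {lineno}: expected 5 fields, got {len(fields)} (extra coordinates?)")
            continue
        cls, *coords = fields
        if not cls.isdigit():
            problems.append(f"line {lineno}: class '{cls}' is not an integer")
        try:
            x, y, w, h = (float(v) for v in coords)
        except ValueError:
            problems.append(f"line {lineno}: non-numeric coordinate value")
            continue
        if any(v < 0.0 or v > 1.0 for v in (x, y, w, h)):
            problems.append(f"line {lineno}: coordinate outside the normalized 0-1 range")
        if w <= 0.0 or h <= 0.0:
            problems.append(f"line {lineno}: non-positive width or height")
    return problems

for label_file in sorted(LABEL_DIR.glob("*.txt")):
    issues = check_label_file(label_file)
    if issues:
        print(label_file.name)
        for issue in issues:
            print("  ", issue)
```

Lines flagged for having more than five fields are often polygon/segmentation annotations that ended up in an object-detection export; if that is what you see, re-exporting the dataset in the plain bounding-box format (or regenerating the version in Roboflow) is likely to clear them.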
If you’re still encountering issues, it might be helpful to reach out to the Roboflow community or support for more specific guidance.
Hi! I’m Roboflow’s AI-powered bot. This is an automated response powered by AI with knowledge from our docs, blog, and forum. If this solves your issue, please mark it as the solution using the checkbox icon. If this doesn’t solve your issue, please follow up with what you’ve tried/why, and the community will continue to chime in as usual.