Detection Needed

Hello… I am implementing an object detection project with Roboflow, and not all of the objects are detected. I tested with 22 different objects; sometimes it detects 15, sometimes 12, and so on. I want it to detect every object with the correct label. What more should I do?
This is how I annotated and generated the dataset.

  1. I took videos and uploaded them to Roboflow, which split the videos into frames.
  2. Then I did the annotation. If there are three objects in an image, I annotated all three objects with the correct labels, and so on.

You can see in the videos that I provide that it is not detecting all the objects. The videos I uploaded and annotated are the same as the videos I provide; the difference is that the videos below show live detection, not prerecorded footage.

I’m sorry, but as an AI, I’m unable to view or analyze the videos you’ve provided. However, based on the information you’ve given and the extracts I have, there are several factors that could be affecting the performance of your object detection model.

  1. Quality of Annotations: The accuracy of your model heavily depends on the quality of your annotations. It’s important to draw accurate and tight bounding boxes around objects of interest and assign correct labels to them. Consistency in annotation is also crucial. You can refer to the Roboflow Annotate guide for best practices.

  2. Quantity and Diversity of Data: The model’s performance can also be affected by the quantity and diversity of your training data. If your model is not detecting all objects, it might be because it hasn’t seen enough examples of those objects during training. Try to include as many different scenarios, angles, and lighting conditions as possible (a quick per-class annotation count, as sketched after this list, can help reveal underrepresented objects).

  3. Model Training: If you’re training your model using Roboflow Train, make sure you’re following the correct steps as outlined in the Roboflow documentation.

  4. Dataset Health Check: Roboflow provides a dataset health check feature that can help you catch silent errors that may reduce the performance of your computer vision models.

  5. Preprocessing and Augmentations: Applying appropriate preprocessing and augmentations to your dataset can also improve model performance. You can do this under the “Generate” tab in Roboflow as mentioned in this blog post.
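
One way to check point 2 offline is to count annotations per class in your exported dataset. The sketch below assumes a YOLO-format export (label files under a `labels` folder, each line starting with a class index); the directory path is a placeholder for your own export location.

```python
# Count annotations per class in a YOLO-format export.
# Assumed layout: dataset/train/labels/*.txt with lines "class_id x y w h".
# The directory path below is a placeholder.
from collections import Counter
from pathlib import Path

label_dir = Path("dataset/train/labels")
counts = Counter()

for label_file in label_dir.glob("*.txt"):
    for line in label_file.read_text().splitlines():
        if line.strip():
            class_id = line.split()[0]  # first field is the class index
            counts[class_id] += 1

# Classes with very few annotations are likely candidates for more data.
for class_id, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"class {class_id}: {n} annotations")
```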

If you’ve checked all these factors and are still experiencing issues, it might be helpful to reach out to the Roboflow support team for more specific guidance.


Hi! I’m Roboflow’s AI powered bot. This is an automated response powered using AI with knowledge from our docs, blog, and forum. If this solves your issue, please mark it as the solution using the checkbox icon. If this doesn’t solve your issue, please follow up with what you’ve tried/why, and the community will continue to chime in as usual.

@Jacob_Witt Dear Sir, I know you are very busy, but we need your help. How can we fix our issue? Please check our video so you can understand it better.

Are these all distinct classes? I think a two-stage model would be appropriate here.

Stage 1 is object detection and detects all of the objects (1 class).
Stage 2 is a classification model and splits the cutouts of the objects into the correct classes.
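
As a rough illustration, here is a minimal sketch of such a pipeline using Roboflow's Python SDK, assuming one hosted detection project with a single generic class and one hosted classification project; the API key, project names, version numbers, and file names are placeholders.

```python
# Two-stage sketch: Stage 1 finds generic "object" boxes, Stage 2 classifies each crop.
# API key, project names, version numbers, and file names are placeholders.
from roboflow import Roboflow
from PIL import Image

rf = Roboflow(api_key="YOUR_API_KEY")
detector = rf.workspace().project("your-detection-project").version(1).model
classifier = rf.workspace().project("your-classification-project").version(1).model

image_path = "frame.jpg"
image = Image.open(image_path)

# Stage 1: single-class detection
detections = detector.predict(image_path, confidence=40, overlap=30).json()

for i, pred in enumerate(detections["predictions"]):
    # Roboflow returns the box center plus width/height; convert to a PIL crop box.
    x, y, w, h = pred["x"], pred["y"], pred["width"], pred["height"]
    left, top = int(x - w / 2), int(y - h / 2)
    crop = image.crop((left, top, left + int(w), top + int(h)))
    crop_path = f"crop_{i}.jpg"
    crop.save(crop_path)

    # Stage 2: classify the cutout and inspect the returned JSON for the top class.
    result = classifier.predict(crop_path).json()
    print(f"box {i}: {result}")
```

The advantage is that the detector only has to learn "there is an object here", while the classifier sees tight crops and can focus on telling the classes apart.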


Many thanks for your reply. Do you mean we need to use two models? Can you give me a reference? We only know how to use one model.

Hi,
I looked at this as well, and it seems the cans(?) are being detected pretty reliably, but only from certain angles. If you watch closely, you can see that all of the objects are detected as the camera approaches and passes directly over them, but the model then loses track as the perspective on individual objects shifts or when they are not directly overhead. This suggests overfitting to certain viewpoints in your dataset or in the pre-trained model you’re using.

Do you need to detect them once, or track them at all times from all angles?
If the latter, I’d suggest augmenting your dataset with more images from the perspectives you’re having trouble with.

For my own projects, I’ve tried to ensure that I collect imagery and label the subjects from as many angles, perspectives and lighting conditions as possible.

A quick hack might be to lower the confidence level in your detection code and see if that picks up the objects again even after the viewing angle has changed.
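
For example, with Roboflow's Python SDK that could look like the sketch below; the API key, project name, version, and image file are placeholders, and the hosted API takes confidence as a percentage.

```python
# Try the same frame at progressively lower confidence thresholds to see
# whether the missed objects reappear. All names below are placeholders.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
model = rf.workspace().project("your-detection-project").version(1).model

for conf in (40, 25, 10):
    preds = model.predict("hard_angle_frame.jpg", confidence=conf, overlap=30).json()
    print(f"confidence={conf}: {len(preds['predictions'])} objects detected")
```

If the objects show up at a lower threshold, the model is seeing them but isn't confident enough, which again points to adding more training images from those viewpoints.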

Hope that helps!


This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.