How can I prevent the model from predicting the same object twice?

Please share the following so we may better assist you:

  1. Project type (Object detection)
  2. The screenshot of the error being triggered in your browser’s developer tools/console.

I’m seeing a lot of “double predictions” (some correct, some incorrect) in my test set from the trained model. Is there a way to make the model not predict the same object twice? It’s seeing the same object with a smaller bounding box and a bigger bounding box.

One thing I will try is to space out the objects more (although I don’t understand why that would matter, since the bounding boxes in the annotations are tight around the object)

Those are the ground truth boxes.

Hi, it isn’t an “error,” but rather a result of the model’s inference settings. You can update your overlap settings when running inference with the model.

For example, in the Example Web App, setting an overlap metric of 30% means that bounding boxes can only overlap up to 30% before they are “merged” and considered as a single box.
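To make the percentage concrete: the “overlap” setting behaves like an IoU (intersection-over-union) threshold. Here is a minimal sketch of the idea; the box format and the exact merge rule are my assumptions for illustration, not Roboflow’s actual implementation:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two predictions on the same object (hypothetical coordinates):
big   = (0, 0, 100, 100)
small = (10, 10, 60, 60)
print(iou(big, small))  # 0.25 -> kept separate at a 30% setting, merged at 20%
```

With a 30% setting, boxes whose IoU stays at or below 0.30 both survive; above it, only one is kept.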


Oh ok nice. Cool that I can change it to something else when I run detection.

After playing around with it for a bit, I’m finding the behavior of this setting perplexing. Would you help me understand what is going on in these cases?

Case 1

On the left it feels like 100% overlap, since the red box completely covers the blue one.

Through trial and error I found that an overlap of 11% and below makes this particular case go away, while 12% and higher makes it show up. Why is this considered 11% overlap?
(the only thing I could think of is that 11% is the overlap between the large red box and the second red box to its right)
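One possible explanation: complete visual containment is not 100% IoU. If the small box sits entirely inside the big one, the intersection is the small box and the union is the big box, so the ratio is just the ratio of their areas. The numbers below are hypothetical, chosen to land near 11%:

```python
# When one box fully contains another:
#   intersection = area of the small box
#   union        = area of the big box
# so IoU = area_small / area_big, which can be low even at "100%" visual overlap.
def contained_iou(area_small, area_big):
    return area_small / area_big

# Hypothetical sizes: a 33x33 inner box inside a 100x100 outer box
print(contained_iou(33 * 33, 100 * 100))  # 0.1089, i.e. about 11%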

Also, when I chose an overlap of less than 11% I got:

Screen Shot 2022-05-26 at 7.46.10 AM
(in this case the boxes are nearly axis aligned, so overlap zero is also working)

This is great! But how did the prediction change as a result of setting the overlap differently? It’s now predicting a red box of a different size and shape (notice the larger red box above).

Case 2

Now, having learnt that 11% works for this one case, I tried a second image at 11% and got this:

At 11%, there are quite a few nearly overlapping cases still present. In the top center-right, it looks like the low overlap setting removed the box around the circle with a bar in it, rather than removing the overlapping boxes around the star with a dot in it.

Just to play around I decided to try 0% overlap … and still got some doubled boxes:

So now I’m confused… what happened here? I thought 0% overlap would remove the doubled boxes for sure.

Is there some way to set a different overlap when visualizing the dataset in the trained output? Then I could easily scan the images and make sure that overlap of 10-11% gets me the results I’m looking for.

I also wonder how precision/recall is calculated in this case and whether I could see a different precision/recall for this different overlap setting. That might also help me understand whether the setting improves the model or not…
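One way the overlap setting could show up in precision/recall: in the usual evaluation, each ground-truth box can only be matched once, so a duplicate box on an already-matched object counts as a false positive. Removing duplicates would then raise precision without touching recall. A rough sketch of that greedy matching (the exact evaluation protocol used by any given platform is an assumption here):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, iou_thresh=0.5):
    """Greedily match predictions (assumed sorted by confidence, highest
    first) to ground-truth boxes. Each ground-truth box matches at most
    once; an extra duplicate box on the same object is a false positive."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    return (tp / len(preds) if preds else 0.0,
            tp / len(gts) if gts else 0.0)

# One object, predicted twice (a tight box and a looser one):
gts = [(0, 0, 100, 100)]
preds = [(0, 0, 100, 100), (5, 5, 95, 95)]
print(precision_recall(preds, gts))  # (0.5, 1.0): the duplicate costs precision
```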

Thanks for all your help, I’m just getting started with CV and I’m learning a lot from your answers.

@Mohamed not sure if this is helpful… I was checking out how the YOLOv5 detector deals with this issue, and they have a --agnostic-nms option that runs NMS across classes.

Is something like that what I need here? I noticed that the overlap setting removes predictions of the same class and leaves predictions of different classes alone … unless I’m just making that up, hoping for a solution here.

Yes, that looks correct! NMS is the value of interest.

There’s more on it in the Glossary in our Knowledge Base.

Hey @Mohamed

Now how do I set that option with the Roboflow inference tool? I don’t see an “agnostic” option anywhere there to select.

If we can get this working I can hopefully try out the model!


The current options in Roboflow are the overlap and confidence flags. I’ll forward this to my team as feature request feedback. #feedback

To have a more explicit NMS setting, you’d want to train and use a custom YOLOv5 model with the --agnostic-nms argument included when you run inference.

This guide has examples of running inference with arguments:

Also, “class-agnostic NMS” makes the suppression step in your inference operate across all classes, so overlapping boxes are merged “without knowing” what class they belong to.
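A toy sketch of the difference (my own illustration, not YOLOv5’s code): per-class NMS only compares boxes that share a class, so the same object predicted as two different classes keeps both boxes; the class-agnostic variant suppresses across classes.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(dets, iou_thresh=0.5, class_agnostic=False):
    """dets: list of (box, score, class_id). Keeps the highest-scoring
    boxes, dropping any box whose IoU with a kept box exceeds iou_thresh --
    only within the same class unless class_agnostic=True."""
    kept = []
    for box, score, cls in sorted(dets, key=lambda d: d[1], reverse=True):
        suppressed = any(
            (class_agnostic or cls == kcls) and iou(box, kbox) > iou_thresh
            for kbox, _, kcls in kept
        )
        if not suppressed:
            kept.append((box, score, cls))
    return kept

# The same object predicted as two different classes (hypothetical boxes):
dets = [((0, 0, 100, 100), 0.9, 0), ((5, 5, 95, 95), 0.8, 1)]
print(len(nms(dets)))                       # 2 -> per-class NMS keeps both
print(len(nms(dets, class_agnostic=True)))  # 1 -> agnostic NMS merges them
```

This matches what you observed: with class-aware suppression, doubled boxes of different classes survive even at a very low overlap setting.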

A higher confidence setting and a higher IoU (overlap) setting for inference help to consolidate overlapping bounding boxes of the same class for the same object. So you should still be able to try this out with your Roboflow model using higher confidence and overlap settings, and implement active learning to improve classes that don’t perform well. Start with lower confidence and overlap for an image, and increase the values in increments of 3-5% with each test.