Fix errors in detection (seen in Visualize) - for model feedback

Hello.

I have a well-trained Roboflow model. It invariably detects the desired objects with >95% confidence, but once in a while a bounding box extends beyond the object of interest. I use an OCR tool to extract the text inside each bounding box, so I run into issues when this happens.

I went back to "Visualize" on Roboflow to confirm that this happens.

When I see that a detection includes other nearby objects, can I manually re-annotate the erroneous detection in some way (so that the model gets the feedback)?

Thanks all. This is indeed a wonderful product!

Hi @Rajesh_Panchanathan - this is a great question!

Our workflows product can help a lot here.

  1. You can set up selective filter rules in your deployment pipeline to re-upload troublesome images to your project, where you can re-label them and retrain.
  2. If you have a systematic bias (bounding boxes are always too wide), you can use functionality like prediction resizing + crop to shrink the boxes before downstream steps such as OCR.
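If you are post-processing predictions in your own code rather than inside Workflows, the two ideas above can be sketched roughly as follows. This is a minimal illustration, not Roboflow's API: it assumes detections arrive as dicts with center `x`/`y` plus `width`/`height` (the format Roboflow's detection responses commonly use), and the `shrink_box`, `is_suspicious`, and `crop_region` helpers and the 0.4 area threshold are hypothetical names and values you would tune for your data.

```python
def shrink_box(pred, factor=0.9):
    """Shrink a detection box around its center by `factor` (hypothetical
    fix for a systematic too-wide bias; assumes center-format box dict)."""
    return {**pred,
            "width": pred["width"] * factor,
            "height": pred["height"] * factor}

def is_suspicious(pred, image_w, image_h, max_area_ratio=0.4):
    """Flag boxes that cover an implausibly large share of the image,
    as a stand-in for a 'selective filter rule' that routes the image
    back to the project for re-labeling. Threshold is an assumption."""
    return (pred["width"] * pred["height"]) / (image_w * image_h) > max_area_ratio

def crop_region(image_w, image_h, pred):
    """Convert a center-format box to integer pixel crop bounds,
    clamped to the image, ready to hand to an OCR tool."""
    x0 = max(0, int(pred["x"] - pred["width"] / 2))
    y0 = max(0, int(pred["y"] - pred["height"] / 2))
    x1 = min(image_w, int(pred["x"] + pred["width"] / 2))
    y1 = min(image_h, int(pred["y"] + pred["height"] / 2))
    return x0, y0, x1, y1
```

A typical loop would check `is_suspicious` first (and queue those frames for re-annotation), then apply `shrink_box` before `crop_region` so the OCR crop hugs the object more tightly.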