Ground Truth and Model Predictions nearly identical, but counted as missed predictions

Hello,
I’m trying to understand why my confusion matrix shows a lot of “missed predictions.”

However, when I compare the “Ground Truth” and the “Model Predictions,” the boxes appear to be in the same positions.

I’ve noticed several such cases. Why is this happening, and what can I do to fix it?

  • **Project Type:** Object detection (YOLOv11, Accurate)
  • **Operating System & Browser:** Windows, Firefox
  • **Project Universe Link or Workspace/Project ID:** Sign in to Roboflow

Hey, just to let you know: I understand the issue. The prediction is correct, but it likely has a confidence score below the evaluation's threshold, so model evaluation filtered it out and counted the ground truth box as a miss.

I’ll improve how we show it.
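
In the meantime, here's a minimal sketch of what's happening under the hood, for anyone who runs into the same thing. The threshold values and names (`CONF_THRESHOLD`, `IOU_THRESHOLD`) are illustrative assumptions, not Roboflow's actual evaluation internals: evaluation drops predictions below a confidence cutoff before matching them to ground truth by IoU, so a well-placed but low-confidence box can still end up counted as a missed prediction.

```python
# Sketch of confidence-threshold filtering during evaluation.
# Boxes are (x1, y1, x2, y2); thresholds and names are illustrative,
# not Roboflow's actual evaluation code.

CONF_THRESHOLD = 0.5   # hypothetical evaluation confidence cutoff
IOU_THRESHOLD = 0.5    # hypothetical IoU cutoff for a match

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

ground_truth = [(100, 100, 200, 200)]
# A well-placed box, but with confidence below the threshold:
predictions = [{"box": (102, 98, 201, 203), "conf": 0.32}]

# Evaluation first drops low-confidence predictions...
kept = [p for p in predictions if p["conf"] >= CONF_THRESHOLD]

# ...then matches what's left against ground truth.
for gt in ground_truth:
    matched = any(iou(gt, p["box"]) >= IOU_THRESHOLD for p in kept)
    print("matched" if matched else "missed (false negative)")
# Prints "missed (false negative)" even though the raw prediction
# overlaps the ground truth almost perfectly (IoU ≈ 0.92).
```

Running the sketch prints "missed (false negative)". If the confidence cutoff were below the prediction's score (e.g. 0.25 here), the same box would match and be counted correctly. That's why the boxes can look identical in the visualization while the confusion matrix still reports a miss.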
