Mismatch between model metrics and visualization

Hi guys,

I trained an object detection model on 25-50 orthomosaic tiles, tried various preprocessing and augmentation options, and eventually realised that although the model's metrics improved considerably (from mAP50 ≈ 10% to ≈ 80%), the visualization actually got worse.

During the first attempts (upper picture, using lots of preprocessing and augmentation), the metrics suggested poor performance but the visualization showed at least some boxes. In the later, simpler versions (lower picture, using only selected preprocessing, no grayscale, no augmentation, public default checkpoints), the metrics suggest quite good performance, but the visualization shows no boxes at all. Only when I push the confidence threshold down to 1% do boxes appear, and then not on the objects I want to detect, but mainly on blurred artefacts in the image. (I want to detect the green blobs in the treetops. They are actually quite easy to spot.)
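To make the threshold behavior concrete, this is essentially what happens when the visualization filters detections by confidence. This is a minimal sketch with hypothetical boxes and scores, not Roboflow's actual API or my model's real output:

```python
# Sketch: how a confidence threshold filters detections before visualization.
# Boxes, scores, and the label name are hypothetical, for illustration only.

def filter_detections(detections, conf_threshold):
    """Keep only detections whose confidence score meets the threshold."""
    return [d for d in detections if d["score"] >= conf_threshold]

# Hypothetical predictions: two very low-confidence boxes on image artefacts,
# none on the actual targets.
preds = [
    {"box": (120, 80, 40, 40), "score": 0.030, "label": "mistletoe"},
    {"box": (300, 210, 25, 25), "score": 0.015, "label": "mistletoe"},
]

print(filter_detections(preds, 0.50))  # default threshold: no boxes survive
print(filter_detections(preds, 0.01))  # 1% threshold: both artefact boxes appear
```

So "no boxes at 50%, artefact boxes at 1%" means the model assigns almost no confidence to anything in the image, which is a training problem rather than a visualization problem.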

So I am wondering: why?

I wanted to set up a base model with a few images, check that it works, and then feed it more images. Now I am wondering: did I do something fundamentally wrong? (Maybe resizing the images from 2048 to 560? Should I generally use smaller tiles? Is it just too few images?)
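One alternative to downscaling 2048 px tiles to 560 px (which shrinks small objects to very few pixels) is to cut each orthomosaic tile into smaller tiles at native resolution. A minimal sketch of the tile-coordinate math, with assumed tile size and overlap values (this is not Roboflow's built-in tiling option, just the underlying idea):

```python
def tile_coords(width, height, tile_size, overlap=0):
    """Return unique (x, y) top-left corners of tile_size x tile_size
    windows that cover the image. Edge tiles are shifted back inside
    the image bounds instead of padding past the border."""
    step = tile_size - overlap
    xs = sorted({min(x, width - tile_size) for x in range(0, width, step)})
    ys = sorted({min(y, height - tile_size) for y in range(0, height, step)})
    return [(x, y) for y in ys for x in xs]

# A 2048x2048 orthomosaic tile split into 512x512 tiles, no overlap:
tiles = tile_coords(2048, 2048, 512)
print(len(tiles))  # 16 tiles, each at full native resolution
```

Each small tile then keeps the original pixel density of the objects, at the cost of more images to label and infer on.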

I would appreciate any help or suggestions on how to succeed.

Thanks a lot!

  • Project Type: Object detection
  • Operating System & Browser: Windows, Firefox
  • Project Universe Link or Workspace/Project ID: mistletoe_50m

If anyone else encounters the same problem: I solved it with a "fresh start from zero", setting the project up anew with the same images and labels and only two preprocessing steps.
I had tried many options across different versions and had always selected "use public checkpoint" for model training, assuming this would always be the same starting point. It seems that the earlier versions did influence the model significantly.

Hope that helps.

