Role of Ground Truth in calculating metrics

I recently used the free project option to run some imagery. I uploaded 26 images and provided annotations and bounding boxes for them. The toolkit trained a model and reported these statistics:

Your EchoTest (2023-08-11 11:22am) model has finished training in 13 minutes and achieved

43.1% mAP, 91.2% precision, and 40.0% recall.

My understanding is that these metrics are based on the model's performance compared with ground truth, but my project does not have ground truth. Can you shed some light on how these numbers were arrived at?

Best

Paul Petronelli

palm@palmcorp.com

Hi @Paul_Petronelli

Yes, you are correct: the metrics are computed by comparing the model's inference results against ground truth. In this case, the annotations and bounding boxes you uploaded are the ground truth; the dataset is split, and the trained model's predictions on held-out images are scored against your boxes. Could you share your project so we can take a deeper look and get on the same page? (You can share your Universe link if you are on the public plan)
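To make that concrete, here is a minimal sketch (in Python, with hypothetical box lists; this is not the toolkit's actual evaluation code) of how precision and recall fall out of matching predicted boxes against annotated boxes at an IoU threshold. Precision = TP / (TP + FP) and recall = TP / (TP + FN), where a prediction counts as a true positive when it overlaps a not-yet-matched annotation by at least the threshold:

```python
def iou(a, b):
    # a, b: boxes as [x1, y1, x2, y2]
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(predictions, ground_truth, iou_thresh=0.5):
    """Greedy one-to-one matching of predictions to ground-truth boxes.

    predictions  -- list of [x1, y1, x2, y2], sorted by confidence (desc)
    ground_truth -- list of [x1, y1, x2, y2] from your uploaded annotations
    """
    matched = set()
    tp = 0
    for pred in predictions:
        # Find the best not-yet-matched ground-truth box for this prediction
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            overlap = iou(pred, gt)
            if overlap > best_iou:
                best_iou, best_j = overlap, j
        if best_iou >= iou_thresh:
            tp += 1
            matched.add(best_j)
    fp = len(predictions) - tp    # predictions with no matching annotation
    fn = len(ground_truth) - tp   # annotations the model missed
    precision = tp / (tp + fp) if predictions else 0.0
    recall = tp / (tp + fn) if ground_truth else 0.0
    return precision, recall

# Example with made-up boxes: one good match, one false positive, one miss
preds = [[10, 10, 50, 50], [60, 60, 90, 90]]      # model output
gts   = [[12, 11, 48, 52], [200, 200, 240, 240]]  # your annotations
print(precision_recall(preds, gts))  # -> (0.5, 0.5)
```

mAP goes one step further: it sweeps the model's confidence threshold, traces the resulting precision-recall curve, and averages precision over it (then averages over classes). That is why a model can show high precision but low recall, as yours does: the boxes it does predict mostly match your annotations, but it misses many of them.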

Thanks for the feedback. I will check with my board and see what can be released.
