I want a detailed report for each SKU to understand where the precision is dropping.

I would need to get the Class Name, Image File Name, Annotated SKU Name, and Predicted SKU Name from the validation dataset.

I'd recommend a few steps to get something close to what you're looking for:

  1. From the versions page of a project, download the dataset. From there, you should get a JSON file that contains the file name, annotated class, etc.
  2. Run your model against the validation images using our inference package.
  3. Compare the annotations against the predictions to see, per class, where they disagree.
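Step 3 could be sketched like this, assuming you've already flattened the downloaded annotation JSON and your inference output into simple filename-to-class mappings (the helper name and the toy data below are hypothetical, not part of any SDK):

```python
from collections import defaultdict

def per_class_precision(annotations, predictions):
    """Compare annotated vs. predicted class per image and compute
    precision for each predicted class.

    annotations: dict mapping image file name -> annotated class name
    predictions: dict mapping image file name -> predicted class name
    """
    tp = defaultdict(int)  # prediction of class X matched the annotation
    fp = defaultdict(int)  # prediction of class X contradicted the annotation
    for fname, predicted in predictions.items():
        if predicted == annotations.get(fname):
            tp[predicted] += 1
        else:
            fp[predicted] += 1
    # precision = TP / (TP + FP), computed per class
    return {
        cls: tp[cls] / (tp[cls] + fp[cls])
        for cls in set(tp) | set(fp)
    }

# Toy data for illustration
annotations = {"img1.jpg": "sku_a", "img2.jpg": "sku_a", "img3.jpg": "sku_b"}
predictions = {"img1.jpg": "sku_a", "img2.jpg": "sku_b", "img3.jpg": "sku_b"}
print(per_class_precision(annotations, predictions))
# → {'sku_a': 1.0, 'sku_b': 0.5}  (sku_b was predicted once where sku_a was annotated)
```

Sorting the output ascending immediately surfaces the SKUs dragging precision down.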

Alternatively, an easier option is our model evaluation feature, which gives better insight into which classes are being predicted incorrectly and potentially causing the precision drop.

Hope that helps!