Difference between API and Web UI for detection models

  • Project Type: Object Detection & Instance Segmentation
  • Operating System & Browser: macOS, all browsers
  • Project Universe Link or Workspace/Project ID:

I’ve trained models for object detection and instance segmentation for a custom application. The images I use range from 1200 × 1200 to 5000 × 5000 pixels. They work great in the web UI when I test the model, but I get slightly worse results when calling the model on the same images through the Python API, even with the same parameters.

I’ve found this thread, which mentions resizing images to 2000 × 2000 before the API request, and this thread, which mentions that the web app handles tiling before calling the model.
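For reference, this is roughly what I understand the resize workaround from the first thread to look like; a minimal sketch assuming Pillow for the preprocessing (the 2000 × 2000 target comes from that thread):

```python
# Sketch of the resize-before-request workaround, using Pillow.
# 2000 is the target size suggested in the linked thread.
from PIL import Image

def resize_for_inference(path, max_side=2000):
    """Downscale an image so its longest side is at most max_side,
    preserving aspect ratio."""
    img = Image.open(path)
    scale = max_side / max(img.size)
    if scale < 1:  # only shrink, never enlarge
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.LANCZOS)
    return img
```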

Is it possible to get information on the tiling done by the app? I would like to reproduce similar results with the API without having to downsize the images and lose detail.
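To make it concrete, this is the kind of tiling I’m imagining; a sketch only, since I don’t know what the web app actually does. The tile size, overlap, and the `model.predict` call are all assumptions on my part:

```python
# Sketch of client-side tiling: split the image into overlapping tiles,
# run inference per tile, and shift predictions back into full-image
# coordinates. Tile size, overlap, and predict() are assumptions, not
# the web app's actual behavior.
from PIL import Image

TILE = 2000     # tile edge length (assumed)
OVERLAP = 200   # overlap so objects on tile borders are not cut off

def tile_predict(model, path):
    img = Image.open(path)
    detections = []
    step = TILE - OVERLAP
    for top in range(0, img.height, step):
        for left in range(0, img.width, step):
            box = (left, top,
                   min(left + TILE, img.width),
                   min(top + TILE, img.height))
            tile = img.crop(box)
            # model.predict() is a stand-in for the actual SDK call
            for pred in model.predict(tile).json()["predictions"]:
                pred["x"] += left   # shift back into full-image coordinates
                pred["y"] += top
                detections.append(pred)
    # Overlapping tiles can yield duplicate detections near tile borders;
    # some form of NMS would be needed to merge them.
    return detections
```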

If you go to the web UI, open the dev tools, and then perform a request, you’ll find the API endpoint used. Instead of the typical route, call that endpoint with your API key, confidence, and image URL. If you call that instead, do you get the results you expect?
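For example, assuming the network tab shows a detect route like the placeholder below, the direct call would look roughly like this (substitute the actual route, model ID, and values you see in dev tools):

```python
# Rough sketch of calling the endpoint found in dev tools directly.
# The URL and parameter values below are placeholders.
import requests

resp = requests.post(
    "https://detect.roboflow.com/your-model/1",  # placeholder; use the route from the network tab
    params={
        "api_key": "YOUR_API_KEY",
        "confidence": 40,
        "image": "https://example.com/image.jpg",  # URL of the hosted image
    },
)
print(resp.json())
```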