**Project Universe Link or Workspace/Project ID:** (private)
When I run detection with the trained model through the app or by calling the model API, I get different results than when I run the model locally.
Does the hosted inference include a preprocessing step that I need to replicate when running inference locally?
I’m facing the same issue: predictions from my model deployed on Roboflow are noticeably different from predictions made locally on the same image. I’ve set the same IoU and confidence parameters in both cases. Any ideas on why this might be happening?
The model deployed on Roboflow predicts 17 fish, while the local model predicts 21 fish on the same image. Moreover, the more objects there are in the image, the larger the discrepancy between the two sets of predictions. Any insights into why this difference occurs?
Since the hosted inference server and your notebook run in different environments, some variation in the results is not uncommon. If you want to reproduce our hosted inference server more closely, check out our open-source Inference server.
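For reference, here is a minimal sketch of running a model through the open-source `inference` Python package so that the local pipeline matches the hosted one more closely. The model ID, API key, and image path are placeholders, and the exact keyword names for the thresholds are assumptions you should check against your SDK version:

```python
# Minimal sketch using Roboflow's open-source `inference` package
# (pip install inference). Model ID and API key are placeholders.
from inference import get_model
import cv2

# Load the same model version that the hosted endpoint serves.
model = get_model(model_id="your-project/1", api_key="YOUR_API_KEY")

image = cv2.imread("fish.jpg")

# Pass the same thresholds you set in the hosted app; the keyword
# names may differ by SDK version, so verify them in the docs.
results = model.infer(image, confidence=0.4, iou_threshold=0.5)

for prediction in results[0].predictions:
    print(prediction.class_name, prediction.confidence)
```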
Hi there, are you using the tiling preprocessing step?
The app auto-tiles images before sending them to the inference server, while locally you’re sending the entire image. We’re aware of this discrepancy and are working on a fix; for now, we recommend tiling locally if you’re running into this issue (see the sketch below).
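If you want to approximate that behavior yourself, a rough sketch of local tiling follows: split the image into overlapping tiles, run inference per tile, shift each box back into full-image coordinates, and de-duplicate seam overlaps with NMS. The tile size, overlap, and the `run_inference` callable are all assumptions you’d replace with your own values and model call:

```python
import cv2
import numpy as np

TILE = 640     # tile edge in pixels (assumption: match your training resolution)
OVERLAP = 64   # overlap so objects on tile seams aren't cut in half

def tile_image(image, tile=TILE, overlap=OVERLAP):
    """Yield (x_offset, y_offset, crop) tuples covering the whole image."""
    h, w = image.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield x, y, image[y:y + tile, x:x + tile]

def tiled_predict(image, run_inference, iou_thresh=0.5):
    """run_inference(crop) -> [(x1, y1, x2, y2, score), ...] in crop coords.
    `run_inference` is a placeholder for however you call your local model."""
    boxes, scores = [], []
    for ox, oy, crop in tile_image(image):
        for x1, y1, x2, y2, score in run_inference(crop):
            boxes.append([x1 + ox, y1 + oy, x2 - x1, y2 - y1])  # x, y, w, h
            scores.append(float(score))
    if not boxes:
        return []
    # Merge duplicate detections from overlapping tiles with plain NMS.
    keep = cv2.dnn.NMSBoxes(boxes, scores, score_threshold=0.0,
                            nms_threshold=iou_thresh)
    return [(boxes[i], scores[i]) for i in np.array(keep).flatten()]
```

Keep the NMS threshold consistent with the IoU setting you use on the hosted side, so the merge step doesn’t introduce its own discrepancy.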
As mentioned before, because the hosted inference server and your local environment differ, some variation in results is expected; the open-source Inference server is the closest way to reproduce hosted behavior locally.