Inference API Bounding Boxes Offset

Hi, I am facing an issue with Roboflow. After training the model, I downloaded the code and ran predictions on a new image in a Jupyter notebook using OpenCV, but the bounding boxes are drawn at an offset. The coordinates returned are slightly off, and this happens for every image.

I have a long image that I cropped into equal parts. I annotated the cropped images and trained the model on them. When I run inference on one of the same cropped images, the bounding boxes appear at the wrong coordinates.

I am running inference through the hosted API using the project name and version. One more thing: I am training and predicting on images of the same resolution, but I am still seeing the offset.

I am uploading two images: one showing my annotation in Roboflow, and the other showing the prediction after training.

Hi @vijay

Sorry to hear you’re having issues. Roboflow’s inference server returns x and y as the center point of the detected object, not the top-left corner. See the relevant section of the docs for more detail. Could that be the source of your issue?
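If you pass the center x/y straight into cv2.rectangle (which expects corner points), every box will be shifted down and to the right by half its size, which matches the offset you describe. Here is a minimal sketch of the conversion; the prediction dict below is illustrative, not real API output:

```python
# Roboflow predictions give x/y as the box *center* plus width/height.
# cv2.rectangle wants two opposite corners, so convert first.

def center_to_corners(pred):
    """Convert a center-format prediction to (x0, y0, x1, y1) corners."""
    x0 = int(pred["x"] - pred["width"] / 2)
    y0 = int(pred["y"] - pred["height"] / 2)
    x1 = int(pred["x"] + pred["width"] / 2)
    y1 = int(pred["y"] + pred["height"] / 2)
    return x0, y0, x1, y1

# Example prediction (hypothetical values):
pred = {"x": 320, "y": 240, "width": 100, "height": 50}
x0, y0, x1, y1 = center_to_corners(pred)
print((x0, y0, x1, y1))  # → (270, 215, 370, 265)

# Then draw with OpenCV, e.g.:
# cv2.rectangle(image, (x0, y0), (x1, y1), (0, 255, 0), 2)
```

If you were previously drawing with `(pred["x"], pred["y"])` as the top-left corner, this conversion should remove the offset.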

If you’re looking for an easy way to display your inferred predictions, I suggest using supervision’s from_roboflow() function together with its plot_image() function.

@vijay did you use the static-crop augmentation? If so, try feeding the whole image to the model rather than the cropped version — it should apply the cropping as a preprocessing step.