Roboflow API returns wrong coordinate results

Hi,

Using the API and drawing a rectangle on the image, I am getting the wrong location. Running the Roboflow test app, it draws the bounding box in the correct location.

I also verified in Photoshop. The coordinates I am getting are off. What am I missing?


Hi,
I am trying out the API to return prediction results. The bounding box coordinates in the returned result seem to be off, even though the Roboflow save action, which saves the image with the bounding boxes visualized, draws them in the correct position. I made sure both results below use the same confidence and overlap values. Any advice?

predict = model.predict("test.jpg", confidence=40, overlap=0).json()
In Python this returns the following JSON result:

{'x': 508.5, 'y': 212.5, 'width': 263.0, 'height': 225.0, 'confidence': 0.9291292428970337, 'class': 'car', 'image_path': 'test.jpg', 'prediction_type': 'ObjectDetectionModel'}
{'x': 233.0, 'y': 164.0, 'width': 40.0, 'height': 34.0, 'confidence': 0.8354455232620239, 'class': 'car', 'image_path': 'test.jpg', 'prediction_type': 'ObjectDetectionModel'}
{'x': 347.5, 'y': 173.0, 'width': 53.0, 'height': 146.0, 'confidence': 0.7597783803939819, 'class': 'person', 'image_path': 'test.jpg', 'prediction_type': 'ObjectDetectionModel'}

whereas if I drop the image into the Roboflow UI to run the detection, using a confidence of 40 and an overlap of 0, it returns:
{
  "predictions": [
    {
      "x": 406.5,
      "y": 170,
      "width": 211,
      "height": 180,
      "confidence": 0.929,
      "class": "car"
    },
    {
      "x": 277.5,
      "y": 138.5,
      "width": 43,
      "height": 117,
      "confidence": 0.84,
      "class": "person"
    },
    {
      "x": 186.5,
      "y": 131.5,
      "width": 31,
      "height": 27,
      "confidence": 0.784,
      "class": "car"
    }
  ]
}

Hi @lchunleo

I haven’t taken a deep look into your project or the results you sent, but here is one quick thing to check: did you verify the format the Roboflow API outputs?

The Roboflow hosted API, which our Python package uses, outputs in the following format:

  • x = the horizontal center point of the detected object

  • y = the vertical center point of the detected object

  • width = the width of the bounding box

  • height = the height of the bounding box

  • class = the class label of the detected object

  • confidence = the model’s confidence that the detected object has the correct label and position coordinates

From Object Detection - Roboflow Docs

The key point is that the API outputs the center point of the inferred object, not the top-left corner. If you’d like guidance on how to convert to corner points, take a look at our docs.
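
As an illustration, here is a minimal sketch of that conversion (not from the thread itself), using OpenCV and the first prediction from the Python output above; the image and output file names and the drawing parameters are only examples:

import cv2

# One of the predictions returned by model.predict("test.jpg", ...).json()
prediction = {'x': 508.5, 'y': 212.5, 'width': 263.0, 'height': 225.0,
              'confidence': 0.93, 'class': 'car'}

image = cv2.imread("test.jpg")

# x and y are the center of the box, so shift by half the width/height
# to get the top-left (x0, y0) and bottom-right (x1, y1) corners.
x0 = int(prediction['x'] - prediction['width'] / 2)
y0 = int(prediction['y'] - prediction['height'] / 2)
x1 = int(prediction['x'] + prediction['width'] / 2)
y1 = int(prediction['y'] + prediction['height'] / 2)

cv2.rectangle(image, (x0, y0), (x1, y1), (0, 255, 0), 2)
cv2.imwrite("test_annotated.jpg", image)

Drawing the box this way should line up with what the Roboflow save action and the hosted UI show.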

Hi @Johan_Eriksson @lchunleo, I merged this topic since it is about the same problem.