Converting tensor bounding boxes back to (x1, y1, x2, y2) image coordinates

0 0.09453125 0.90234375 0.1890625 0.1953125
I have a tensor and I'm having trouble converting it back to pixel coordinates for my image (640, 640). On this site, I converted my dataset annotations into tensors for the YOLOv8 model.
Now I have my own output tensors and I need to translate them the other way around, that is, back into image coordinates.

Hi @Влад_Тихонравов - if I understand correctly, you're aiming to convert the detections from normalized 0-1 values to pixel coordinates?

For YOLOv8 inference, you can receive the predictions as boxes, for example with object detection. We have an example of this in the training notebook: GitHub - roboflow/notebooks: Set of Jupyter Notebooks linked to Roboflow blog posts and used in our YouTube videos.
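In the meantime, here is a minimal sketch of the conversion itself. It assumes your tensor row is in YOLO label format — `class x_center y_center width height`, all normalized to 0-1 — and that the image is 640x640, as in your post. The function name `yolo_to_xyxy` and the `img_w`/`img_h` parameters are just illustrative, not part of any library:

```python
def yolo_to_xyxy(box, img_w=640, img_h=640):
    """Convert a normalized (xc, yc, w, h) box to pixel (x1, y1, x2, y2).

    The center/size values are fractions of the image dimensions, so we
    shift from center to corners and then scale by the image size.
    """
    xc, yc, w, h = box
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return x1, y1, x2, y2


# The box from your post: class 0, then normalized xc, yc, w, h
cls = 0
box = (0.09453125, 0.90234375, 0.1890625, 0.1953125)
print(cls, yolo_to_xyxy(box))  # x1 ≈ 0, y1 ≈ 515, x2 ≈ 121, y2 ≈ 640
```

Note that the object sits in the bottom-left corner of the image (x1 comes out to 0 and y2 to 640), which is a quick sanity check that the conversion direction is right. If you are reading the Ultralytics `Results` object directly, it already exposes `boxes.xyxy` in pixels, so this manual step is only needed for raw label-format tensors.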