How does Roboflow resize (stretch) its images?

I’ve created a dataset using Roboflow and trained a YOLO model with 640x640 images. My original images were transformed using Roboflow’s “Resize: Stretch to 640x640” step. Now I need to apply the same transform before feeding images into my trained model. Since I’m doing inference outside of Roboflow, I want to reproduce the same “resize” step in my local code, but I can’t figure out which resizing technique Roboflow is using. I’ve tried various interpolation methods from Pillow and cv2, but none of them produce the quality of results that Roboflow does.

What resizing technique is Roboflow using for the resize step?

Thank you!

I think I’ve found it!

import cv2

resized_image = cv2.resize(image, (640, 640), interpolation=cv2.INTER_AREA)

I missed that one in my first experiments, but found it in the guide “What is Image Resizing? A Computer Vision Guide.”

For anyone else - this method works well when downsizing, especially with my text-heavy images.
