I created a dataset in Roboflow and trained a YOLO model on 640x640 images. The originals were preprocessed with Roboflow's "Resize: Stretch to 640x640" step, and since I'm running inference outside of Roboflow, I need to reproduce that same resize locally before feeding images into my trained model. The problem is that I can't figure out which resizing technique Roboflow is using: I've tried the various interpolation methods in Pillow and OpenCV, but none of them match the quality of Roboflow's output.
What resizing technique is Roboflow using for the resize step?
Thank you!