Reducing Image Size for Faster Inference

Confirming that given a 1920x1080 (16:9) image, if I reduce it to 640x360 (16:9) for faster inference, the resulting prediction positions will be mappable back to the original using an upscale ratio of 1.78:1.
I did the math and think it works out, but would appreciate a double check on this one.

It actually looks like there is an easier way: multiply by whatever you divided by to get back the right results.
E.g.
1000x1000 → 200x100
Multiply the predicted x and width by 5, and multiply y and height by 10, to revert back to the original coordinate system. In my case, 1920x1080 → 640x360 divided both dimensions by 3, so multiply x, y, width, and height by 3. (The 1.78 figure is just the 16:9 aspect ratio, not the scale factor.)
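A minimal sketch of the multiply-back-by-the-downscale-factor idea above; the function name and box format (x, y, w, h) are my own assumptions, not from any particular detection library:

```python
# Sketch: map box predictions from a downscaled image back to the
# original coordinate system by multiplying by the downscale factors.

def upscale_box(box, orig_size, small_size):
    """box = (x, y, w, h) predicted on the downscaled image."""
    sx = orig_size[0] / small_size[0]  # horizontal scale factor
    sy = orig_size[1] / small_size[1]  # vertical scale factor
    x, y, w, h = box
    return (x * sx, y * sy, w * sx, h * sy)

# 1920x1080 → 640x360: both factors are 3
print(upscale_box((100, 50, 40, 20), (1920, 1080), (640, 360)))
# → (300.0, 150.0, 120.0, 60.0)

# Non-uniform case, 1000x1000 → 200x100: x/w scale by 5, y/h by 10
print(upscale_box((100, 50, 40, 20), (1000, 1000), (200, 100)))
# → (500.0, 500.0, 200.0, 200.0)
```

Note the two factors only coincide (here, 3) when the downscale preserves aspect ratio, which is why the 1000x1000 → 200x100 example needs separate x and y multipliers.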