Example Web App image resizing during inference? Website screenshot

So the latest model I trained does well on images with the exact same dimensions as my training data. However, I think your web app resizes images above 1500px on the long side (it's in your source code), so when I feed it a long screenshot of a website the resizing throws off the inference.

Is there a workaround for this?

Most models I know of accept images of a fixed size (usually square), so tiling your input into roughly square pieces is likely what you need.

Website screenshots are an especially awkward case where you really don't want to scrunch them into squares; you want to deal with them one "page" at a time. A workaround is to subdivide the screenshot into multiple images and run inference on each slice, for example in increments of 416px along the long dimension.
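As a minimal sketch of that slicing step, the helper below computes the crop boxes for fixed-height tiles (416px here, per the suggestion above; the function name and the Pillow usage in the comment are my own, not from your app's source):

```python
# Sketch: compute horizontal slice boxes for a tall screenshot so each
# tile is at most tile_h pixels tall. The last tile keeps whatever
# height remains instead of being padded or stretched.
def slice_boxes(width, height, tile_h=416):
    """Return (left, upper, right, lower) crop boxes, top to bottom."""
    boxes = []
    for top in range(0, height, tile_h):
        boxes.append((0, top, width, min(top + tile_h, height)))
    return boxes

# With Pillow (assumed installed) you would then crop and save each tile:
#   from PIL import Image
#   img = Image.open("photo_for_inference.jpeg")
#   for i, box in enumerate(slice_boxes(*img.size), start=1):
#       img.crop(box).save(f"{i}_photo_for_inference.jpeg")
```

A 1200x1000 screenshot, for instance, yields three boxes of heights 416, 416, and 168.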

After creating the split images, you could add a step in your code to save them locally or to a server, then pass the model predictions back and match them to the image names. Make sure your script creates unique file names for the split images if you'd like to properly match the inference results to an output file or database:

Example:

  • original image title: photo_for_inference.jpeg
  • split images: 1_photo_for_inference.jpeg, 2_photo_for_inference.jpeg, 3_photo_for_inference.jpeg

This helps with matching predictions back, since you can iterate over the file names with a counter (e.g. {i}_photo_for_inference.jpeg) when performing inference with the API. See: Using the Roboflow Hosted Inference API, Displaying the Response Image with format=image, and Drawing a Box from the Inference API JSON Output.
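To sketch the matching step: the counter `i` in the file name also tells you each tile's vertical offset in the original screenshot, so you can shift per-tile predictions back into full-image coordinates. The prediction dict shape (`x`/`y`/`width`/`height` keys) and the commented endpoint call are assumptions for illustration, not the exact Hosted Inference API response:

```python
# Shift a tile-local prediction back into the original screenshot's
# coordinate space, given the 1-based tile counter from the file name
# and the tile height used when splitting.
def to_original_coords(prediction, tile_index, tile_h=416):
    shifted = dict(prediction)
    shifted["y"] = prediction["y"] + (tile_index - 1) * tile_h
    return shifted

# Hypothetical inference loop (endpoint, params, and response keys are
# assumptions; consult the Hosted Inference API docs for the real ones):
#   import requests
#   results = []
#   for i in range(1, num_tiles + 1):
#       name = f"{i}_photo_for_inference.jpeg"
#       # resp = requests.post(API_URL, params={"api_key": KEY},
#       #                      files={"file": open(name, "rb")}).json()
#       # results += [to_original_coords(p, i) for p in resp["predictions"]]
```

The x coordinate needs no shift because the tiles span the full image width.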

You could also take the newly created images and select them one by one for inference within the Example Web App, which accepts locally stored files as well as hosted images.