I’ve been using an image slicer before passing images into a small object detection model. However, even when I set the overlap to 0, the slices still overlap, which causes more objects to be detected than actually exist.
Is there a way to fix this? For example, instead of specifying exact slice dimensions (which doesn’t work well with my variable image sizes), could I split the image into fixed grids like 2x2, 4x4, or 8x8? I don’t see an option to divide the image this way—would it be possible to implement something like that?
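In case the slicer doesn't expose a grid mode, one workaround I'm considering is computing the tile size from the image dimensions myself. Here's a minimal sketch of the idea, assuming NumPy/OpenCV-style `(H, W, C)` images; `detect()` below is just a hypothetical placeholder for the model call:

```python
import numpy as np

def split_into_grid(image: np.ndarray, rows: int, cols: int):
    """Split an image into a rows x cols grid of non-overlapping tiles.

    Yields (tile, x_offset, y_offset) so detections can be mapped back
    to full-image coordinates afterwards.
    """
    h, w = image.shape[:2]
    # linspace handles dimensions that are not evenly divisible by rows/cols
    y_edges = np.linspace(0, h, rows + 1, dtype=int)
    x_edges = np.linspace(0, w, cols + 1, dtype=int)
    for i in range(rows):
        for j in range(cols):
            y0, y1 = y_edges[i], y_edges[i + 1]
            x0, x1 = x_edges[j], x_edges[j + 1]
            yield image[y0:y1, x0:x1], x0, y0

# Usage sketch (detect() is hypothetical, returning (x1, y1, x2, y2, score, cls) tuples):
# all_boxes = []
# for tile, x0, y0 in split_into_grid(image, 4, 4):
#     for (x1, y1, x2, y2, score, cls) in detect(tile):
#         all_boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, score, cls))
```

This way the grid (2x2, 4x4, 8x8, ...) adapts to whatever size the input image happens to be.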
Another possibility: if you run this in the Workflow platform, you might be able to use a “Detections Merge” block to eliminate the duplicates. I haven’t tried it myself yet, but it seems plausible.
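If you're running outside the Workflow platform, you could also deduplicate the merged slice results yourself with a simple class-agnostic NMS pass. A minimal NumPy sketch (the `(x1, y1, x2, y2)` box format and the 0.5 IoU threshold are assumptions you'd adjust for your setup):

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.5) -> np.ndarray:
    """Class-agnostic non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns the indices of the boxes to keep.
    """
    order = scores.argsort()[::-1]  # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the top-scoring box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]  # drop near-duplicate boxes
    return np.array(keep, dtype=int)
```

That should catch the double detections along slice seams regardless of where the overlap came from.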