Slicing images for training small objects?

Hello, I’m trying to figure out how to improve my small-object detection. Previously I trained on full images, each containing a few small objects, and then ran inference with a SAHI workflow. This approach has been semi-successful: sometimes it detects all of the objects, but other times it misses some or only finds a few.

I’ve been trying to add augmentation and preprocessing steps that I thought would make inference run much better. But the opposite happened: tiling the training images and adding skew and rotation made everything much worse. It no longer detects anything.

What is the best approach for getting small objects detected? Obviously I will keep using the SAHI workflow, but are there any preprocessing or augmentation steps that should make the model perform better when training for a SAHI workflow?
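For context on the inference side: a SAHI-style pass runs the detector over overlapping windows and merges the results, so it helps to know roughly how the windows are laid out when choosing training tile sizes. Here is a minimal sketch of that windowing scheme (not SAHI's actual implementation; the function name, the 640 px default, and the 20% overlap are my own assumptions for illustration):

```python
def slice_regions(img_w, img_h, slice_size=640, overlap_ratio=0.2):
    """Return (x1, y1, x2, y2) windows covering the image with overlap,
    mirroring the slicing idea behind SAHI-style inference."""
    step = max(1, int(slice_size * (1 - overlap_ratio)))

    def starts(length):
        s = list(range(0, max(length - slice_size, 0) + 1, step))
        if s[-1] + slice_size < length:  # make sure the last window reaches the edge
            s.append(length - slice_size)
        return s

    # Windows are clipped at the image border for small images.
    return [(x, y, min(x + slice_size, img_w), min(y + slice_size, img_h))
            for y in starts(img_h) for x in starts(img_w)]
```

One takeaway from this layout: if your training tiles are much larger or smaller than the inference slice size, the objects the model sees at train time and at inference time will be at different effective scales, which is one plausible reason tiled training can hurt rather than help.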

Thanks.

These are the dataset preprocessing & augmentation steps from my latest attempt.

Preprocessing

Auto-Orient: Applied
Tile: 2 rows x 2 columns
Modify Classes: 0 remapped, 10 dropped
Filter Null: Require at least 90% of images to contain annotations.
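To make the Tile step above concrete: a 2 rows x 2 columns tile splits each image into quadrants, and every annotation has to be remapped into the local coordinates of the tile it falls in, with boxes clipped at tile borders. A sketch of that bookkeeping (my own illustration, assuming COCO-style `[x, y, w, h]` boxes and a hypothetical 2 px minimum to drop slivers; Roboflow handles this internally when you enable Tile):

```python
def tile_2x2(img_w, img_h, boxes):
    """Split an image into a 2x2 grid and remap [x, y, w, h] boxes
    into each tile's local coordinates."""
    tile_w, tile_h = img_w // 2, img_h // 2
    tiles = []
    for row in range(2):
        for col in range(2):
            ox, oy = col * tile_w, row * tile_h  # tile origin in the full image
            kept = []
            for x, y, w, h in boxes:
                # Intersect the box with this tile, then shift to tile-local coords.
                x1, y1 = max(x, ox), max(y, oy)
                x2 = min(x + w, ox + tile_w)
                y2 = min(y + h, oy + tile_h)
                if x2 - x1 > 2 and y2 - y1 > 2:  # drop near-empty fragments
                    kept.append((x1 - ox, y1 - oy, x2 - x1, y2 - y1))
            tiles.append(((ox, oy, tile_w, tile_h), kept))
    return tiles
```

Note that a box straddling a tile border ends up as a clipped fragment in two tiles; if many of your small objects sit on the tile seams, tiling can quietly degrade the labels the model trains on.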

Augmentations

Outputs per training example: 3
90° Rotate: Clockwise, Counter-Clockwise, Upside Down
Rotation: Between -5° and +5°
Shear: ±5° Horizontal, ±5° Vertical

Yes! Definitely try adding the Resize preprocessing step. That is a known factor to improve performance in SAHI workflows!
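For anyone lining this advice up with their annotations: when an image is resized, the boxes scale by the same width/height factors (Roboflow applies this automatically when you add the Resize step; the hypothetical `resize_boxes` helper below just shows the arithmetic):

```python
def resize_boxes(boxes, src_size, dst_size):
    """Scale [x, y, w, h] boxes from src (w, h) to dst (w, h) image size."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    return [(x * sx, y * sy, w * sx, h * sy) for x, y, w, h in boxes]
```

The practical point is to pick a Resize target that matches the slice size you will use in the SAHI workflow, so objects appear at a similar scale in training and inference.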
