Applying Isolate Objects preprocessing to training data

Hi

I have 1024x1024 images, each containing a large number of small objects (hundreds). I wanted to test how applying Isolate Objects at preprocessing would affect the training results, and saw mAP = 96.2%, precision = 96.3%, and recall = 93.5%. During inference, the images in the test folder were preprocessed too, so each object was its own image, and the predictions were accurate. However, I would like to apply the model to the whole 1024x1024 image to recognise all objects per image, but I do not know how to do it. Is there a tutorial somewhere?

@Marlene the process you are running will not be the most effective for full-image inference.

Isolate Objects produces image crops localized to only the bounded areas of the objects of interest. That feature is meant for converting a detection dataset to a classification dataset, for example if you wanted to run two-pass detection, i.e. Object Detection inference → save image crop → send image crop to a Classification model for classification.

^ If that two-pass setup is what you're after, you would train your Object Detection model without Isolate Objects, and train your Classification model on the cropped images exported with the Isolate Objects version generated from your bounding-box object detection project.
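As a rough sketch of that two-pass flow with the Roboflow Python package (the project names, version numbers, and API key below are placeholders, not values from this thread):

```python
# Two-pass sketch: detect objects on the full image, crop each box, classify each crop.
# Project names, version numbers, and the API key are placeholders.
from PIL import Image
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
det_model = rf.workspace().project("your-detection-project").version(1).model
cls_model = rf.workspace().project("your-classification-project").version(1).model

image_path = "full_image.jpg"
image = Image.open(image_path)

# Pass 1: object detection on the full image
detections = det_model.predict(image_path, confidence=40, overlap=30).json()

for i, pred in enumerate(detections["predictions"]):
    # Roboflow returns box centers; convert to corner coordinates for cropping
    left = int(pred["x"] - pred["width"] / 2)
    top = int(pred["y"] - pred["height"] / 2)
    right = int(pred["x"] + pred["width"] / 2)
    bottom = int(pred["y"] + pred["height"] / 2)
    crop_path = f"crop_{i}.jpg"
    image.crop((left, top, right, bottom)).save(crop_path)

    # Pass 2: send the crop to the classification model
    print(i, cls_model.predict(crop_path).json())
```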


If you’d like to proceed with full-image inference using your model, simply run inference against your original image files, resized to 1024x1024.

If you trained with Roboflow Train, here’s how to run inference and use the Python Package: Python Package - Roboflow
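Here is a minimal sketch of hosted inference with the Python package (the project name, version number, and API key are placeholders):

```python
# Minimal hosted-inference sketch; project name, version, and API key are placeholders.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
model = rf.workspace().project("your-project").version(1).model

# Run inference against the full 1024x1024 image
result = model.predict("full_image.jpg", confidence=40, overlap=30).json()
print(result["predictions"])
```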


Hi @Mohamed

I am generating a new version now. I followed the blog on small object detection and applied the following:

Image preprocessing:
  • Auto-Orient
  • Resize: 1024x1024 (although not sure about this)
  • Auto-Adjust Contrast: Contrast Stretching
  • Tile: 8x8

Augmentations:
  • Flip
  • Rotate
  • Mosaic

I am going to start training once the dataset is generated. I am debating between Scaled-YOLOv4 and YOLOv5… do you have any advice on which might be more appropriate?

I caution against 8x8 tiling for 1024x1024 images unless the objects of interest are very small (less than 16-24 pixels in height and width).

Tiling 8x8 and Resize of 1024x1024 would give you 64 image patches for each image, all individually resized to 1024x1024.

  • That means you have 64 images of 128x128 pixels, each resized up to 1024x1024.
  • The pixels would be upscaled 8x in each dimension, so the patches get blown up a lot.

It may be better to Tile at 2x2 instead, for example. That splits each image into 4 patches of 512x512, each of which then gets resized (see the quick arithmetic check below):

  • If training Scaled-YOLOv4: Resize to 416x416 (in the notebook, the --img 416 flag sets the square input image size of 416x416)
  • If training YOLOv5: Resize to 640x640
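To make the arithmetic concrete, here is a quick back-of-the-envelope check (assuming a 1024x1024 source image and the resize targets above):

```python
# Back-of-the-envelope tiling arithmetic for a 1024x1024 source image
src = 1024
for n in (2, 5, 8):
    patch = src // n  # patch edge length (integer division; 5x5 is approximate)
    print(f"{n}x{n}: {n * n} patches of ~{patch}x{patch}px, "
          f"{416 / patch:.1f}x upscale to 416, {640 / patch:.1f}x upscale to 640")
```

Values below 1.0x are downscales; the 8x8 row shows how aggressively 128x128 patches get blown up.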

You’re able to train either model, or both. Both notebooks, and more, are all available here:

Test your trained models to benchmark performance and select the one you like best to continue your process.

This should help with framing how you set Tiling and Resizing:

^ With the caution about 8x8 in mind, it may still be a worthwhile experiment. Try 8x8, 5x5, and 2x2.

Please do let us know how it performs, as I’m interested to see what works best in your case.


My objects are very small… I checked 3 just to make sure, and the sizes are 24x12, 15x14, and 15x11 px, so I may continue with 8x8 for now and see how it goes. If I run the training on Roboflow, which model would it use? I would like to test Scaled-YOLOv4 and YOLOv5 locally, but I'm also curious to see the metrics Roboflow produces…

The Roboflow Train model will be useful to benchmark against, or to make comparisons as you begin setting up your custom training. It can help you approximate how many epochs to train for, for example.

While you won’t have our exact hyperparameter settings, this will still save you time in the end. Access to the Deploy Tab is also very useful for benchmark tests.

Here is how to read or analyze Roboflow Train graphs:

And sounds good: let's go with 8x8 then, and see what results you get.


Hi Mohamed,

I used Roboflow to train the dataset and got the following results at ~300 epochs: mAP = 71.5%, precision = 82.0%, recall = 67.2%. I tried the model in Roboflow's app using a full image, but no objects are detected… what am I missing?

Try a lower confidence level. Maybe 10%, or even 5%, and see if anything shows up.
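If you are testing with the Python package, confidence is a percentage parameter on predict. A minimal sketch (placeholders for the API key, project, and version):

```python
from roboflow import Roboflow

# Placeholders for the API key, project, and version
model = Roboflow(api_key="YOUR_API_KEY").workspace().project("your-project").version(1).model

# Drop the confidence threshold (a percentage) to surface low-scoring detections
result = model.predict("full_image.jpg", confidence=5, overlap=30).json()
print(len(result["predictions"]), "predictions found")
```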

Active Learning will be helpful, too:

Otherwise, I’d also say to check the Ground Truth vs. Model Predictions in Visualize to see the performance for the Validation and Test sets:


Hi Mohamed,

2 predictions show up on the 1024x1024 image. If I use a tile and reduce the confidence to 2%, the objects are detected, but my goal is to detect the objects in the full image… I read Launch: Edge Tiling During Inference… and this is closer to what I would like to achieve, but I am unsure how to do it… is there a notebook available? Sorry for the long thread.
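For reference, the general idea behind tiling at inference time can be sketched as follows. This is not Roboflow's exact implementation, just a minimal illustration: split the full image into a grid, run the detector on each tile, and shift each prediction back into full-image coordinates.

```python
# Sketch of inference-time tiling (not Roboflow's exact implementation).
# Splits the full image into an n x n grid, runs detection on each tile,
# and shifts predictions back into full-image coordinates.
from PIL import Image

def tiled_inference(model, path, n_tiles=8, confidence=5):
    image = Image.open(path)
    w, h = image.size
    tile_w, tile_h = w // n_tiles, h // n_tiles
    all_predictions = []
    for row in range(n_tiles):
        for col in range(n_tiles):
            box = (col * tile_w, row * tile_h, (col + 1) * tile_w, (row + 1) * tile_h)
            tile_path = f"tile_{row}_{col}.jpg"
            image.crop(box).save(tile_path)
            result = model.predict(tile_path, confidence=confidence).json()
            for pred in result["predictions"]:
                # Roboflow predictions use box-center coordinates within the tile;
                # add the tile's offset to map them back onto the full image
                pred["x"] += col * tile_w
                pred["y"] += row * tile_h
                all_predictions.append(pred)
    return all_predictions
```

Overlapping the tiles and de-duplicating boxes near tile borders (e.g. with non-maximum suppression) would make this more robust for objects that straddle tile edges.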