Bounding boxes are broken after applying Tile as a preprocessing step

#Object detection #2-category #YOLOv5 #Tile

Hello everyone,
I have big images and need to tile them into small patches. After applying the Tile preprocessing step introduced by the amazing Roboflow, I found that some bounding boxes in the image patches were split into two parts. That means the objects I labeled in those bounding boxes are no longer complete. I don't know how much this would affect model accuracy. Do you think it is an issue? If yes, any suggestions on fixing it? Thanks.

Have a good day.

Hi @Newgate, as a note, the images would need to be tiled at inference regardless: if you're using tiling when preprocessing the images for training, you'll want to use tiling during inference as well.

So long as you’re fully labeling each and every object of interest, consistently, you should be fine.

What tile setting are you using? 2x2? And what is the median image size in the dataset?

  • you can view Median image size with Health Check

Hi, thanks for the information. My images come in three resolutions:

  • 6016×4008
  • 5472×3648
  • 4000×3000

I am still trying to figure out the optimal patch size for training. Any suggestions?

By the way, I would like to know some details about the Tile operation: if the image dimensions are not evenly divisible by the tile count, what is the logic behind the tiling? Does it automatically resize the patches? I did not see any padding pixels added. Thank you so much!

Just do 640x640 for resize and Tile 4x3

You can also try 1280x1280 for Resize with Tile at 4x3; just note it will take longer to train and infer this way since the images are larger.

The image is split evenly into the grid of tiles you designate, and then, if you choose Resize, each tile is resized accordingly.
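For intuition, an even split followed by Resize works out like the sketch below (a minimal Pillow-based illustration of the numbers in this thread, not Roboflow's actual code). Conveniently, all three of your resolutions divide evenly by 4x3: 6016×4008 gives 1504×1336 tiles, 5472×3648 gives 1368×1216, and 4000×3000 gives 1000×1000, each then resized to 640×640.

```python
from PIL import Image

def tile_and_resize(path, cols=4, rows=3, out_size=(640, 640)):
    """Split an image into an even cols x rows grid, then resize each tile."""
    img = Image.open(path)
    w, h = img.size
    tile_w, tile_h = w // cols, h // rows  # e.g. 4000x3000 -> 1000x1000 tiles
    tiles = []
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
            tiles.append(img.crop(box).resize(out_size))
    return tiles

patches = tile_and_resize("drone_image.jpg")  # placeholder filename
print(len(patches))  # 12 patches, each 640x640
```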

Clear, thank you so much.

About the Tile function from Roboflow, I am still curious. Let us say, for example, my image size is 4000×3000. If I set 9×9 for the tile operation, the dimensions are not evenly divisible. In this case I get 81 patches; however, the total width of the 9 patches in a row is smaller than 4000, and the total height of the patches in a column is smaller than 3000 as well. Does that mean the Tile function deletes the last pixels?

I did not find the documentation for the Tile function.

Thanks again

As mentioned, the operation is the same: it will divide the image into an even 9x9 grid, split from the original image dimensions, and then resize each tile accordingly.

There is an image preview example here: https://docs.roboflow.com/image-transformations/image-preprocessing#tiling
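On the remainder question: with plain floor division, a 4000×3000 image split 9×9 gives 444×333 tiles and leaves a 4 px strip on the right and a 3 px strip at the bottom unused, which would match the shortfall you measured. A split that absorbs the remainder instead can be computed like this (a sketch of one possible approach, not Roboflow's confirmed behavior):

```python
import numpy as np

w, h, n = 4000, 3000, 9

# Integer cut points that cover the full image: tile sizes differ by at
# most 1 px across the grid and sum exactly to the image dimensions.
x_cuts = np.linspace(0, w, n + 1, dtype=int)
y_cuts = np.linspace(0, h, n + 1, dtype=int)

print(np.diff(x_cuts))        # [444 444 445 444 445 444 445 444 445]
print(np.diff(x_cuts).sum())  # 4000 -- no pixels dropped
```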

Hey Mohamed, I'm having a similar problem, and looking at the Edge Tiling During Inference post doesn't help much; it doesn't give any specific direction or code for how to apply tiling at inference. I annotated and trained my data on 832×832 images but need to run inference on large drone images. How do I split these large images up?

Cheers, Olivier

Hi Olivier,
did you figure anything out about this? We also have high-res drone images and want to tile at inference with a custom model. Unfortunately, the Roboflow documentation only mentions that tiling at inference is possible but does not give clear instructions on how to do it.

Hey,
I did! I used custom code built on the SAHI library (https://github.com/obss/sahi, a framework-agnostic sliced/tiled inference library); it was super useful and pretty easy to use. If you need any help just email me at olivierdecitre@gmail.com, I'd be more than happy to try and help out!
Good luck

Hey Olivier,
thanks a lot for the swift reply! We are also using SAHI outside Roboflow. Did you deploy your model together with SAHI for inference on Roboflow? That's what we are struggling with in particular.

No worries! I mainly used Roboflow to label, create (with augmentations), and organise my data. I then exported the training data, used Google Colab to train with YOLOv5, exported the trained models as .pt files, and just ran a Python script with SAHI on my orthomosaics to get the inference.
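The core of that script looked something like this (a sketch against the current SAHI API; the weights path, slice size, and confidence threshold are placeholders to adapt):

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Load the YOLOv5 weights exported from Colab ("best.pt" is a placeholder)
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",
    model_path="best.pt",
    confidence_threshold=0.4,
    device="cuda:0",  # or "cpu"
)

# Slice the orthomosaic into 832x832 windows (matching the training tiles)
# with 20% overlap, run the model on each slice, and merge the detections
# back into full-image coordinates.
result = get_sliced_prediction(
    "orthomosaic.tif",
    detection_model,
    slice_height=832,
    slice_width=832,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

result.export_visuals(export_dir="predictions/")
print(len(result.object_prediction_list), "detections")
```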

Hope this helps. I wrote a paper on it; I can send you the methods if you want.

Hey Olivier,
thanks a lot for the details. Are you then uploading the inference results to Roboflow again? Am I right that uploading inference data only works for images without pre-existing annotations?

We are trying to use a model we trained outside of Roboflow to add annotations to images that already have a subset of the classes labeled. That's why we are looking for a way to either use that model together with SAHI as a deployed model in Roboflow, or to append to/overwrite the existing annotations. Both options are giving us trouble.

I'm not sure I fully understand what you are trying to do. I found Roboflow to be good for some things, like sorting my data, but pretty bad when it actually came down to running inference, so I'd do all of that locally with Python scripts.

From what I understand, if you are trying to add a class of annotations to some data, I'd just use the model in SAHI, export the bounding box data, and do the same with the annotations in Roboflow, although I've never done it and don't know if it's even possible.

If you explain your issue/what you are looking for in a bit more detail, maybe I can try and help. I'm sorry, I'm also pretty new to this :sweat_smile:

Hey Olivier,
thanks a lot for offering to help, but I don't want to waste too much of your time on Roboflow troubles if it's something you have never done before.
We are trying to upload additional annotations to images that already have some annotations. This can be done with a deployed model in the UI; however, in that case one can't use SAHI. When uploading via the API, we can only upload annotations for images without pre-existing annotations. I will create a new issue about this topic. Nevertheless, thank you very much for your help!

Ah I see. I'm sorry, I don't think I can help much with this; good luck, and let me know if you figure out a way around it!

Hi @npelzmann - happy to help! You are able to upload image annotations via the API (roboflow import).
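For reference, with the Python SDK an image-plus-annotation upload looks roughly like this (workspace/project IDs and file names are placeholders):

```python
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")

# Upload an image together with its annotation file
# (e.g. Pascal VOC XML or COCO JSON from your own pipeline)
project.upload(
    image_path="tile_0001.jpg",
    annotation_path="tile_0001.xml",
)
```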

Hi Jacob,
thanks for the help and for pointing me towards the API. I originally tried to overwrite annotations via the Python SDK, which did not work, as stated above. I noticed that the REST API does allow overwriting of annotations, so I implemented this as a feature in the Python SDK and created a pull request. I hope this helps others as well.
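For anyone who lands here later, the REST call I ended up mirroring in the SDK was along these lines (a sketch: the annotate endpoint shape follows the REST docs at the time, and the overwrite query parameter is the piece the SDK was missing; dataset name, image ID, and file names are placeholders):

```python
import requests

# Read the raw annotation file; it is sent as the request body.
with open("tile_0001.xml") as f:
    annotation = f.read()

resp = requests.post(
    "https://api.roboflow.com/dataset/your-dataset/annotate/IMAGE_ID",
    params={
        "api_key": "YOUR_API_KEY",
        "name": "tile_0001.xml",
        "overwrite": "true",  # replace pre-existing annotations
    },
    data=annotation,
)
print(resp.json())
```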
