Annotations for different zoom levels of same image

I am currently working on training an object detection model to identify greenhouses in satellite imagery. For this purpose, I annotate images using Roboflow. To create a larger and more diverse dataset, I used satellite images of the same location at different zoom levels. However, during manual annotation, the same objects (which appear in multiple zoom-level images) may end up annotated slightly differently across those images.

My concern is whether these inconsistencies in annotations for the same objects—due to variations in zoom levels—could confuse the YOLO model during training. Could this affect the model’s ability to learn effectively, as it might encounter the same object annotated differently in different images?

Hello! It’s best to train your model on data that resembles what your production data will look like. If your production data will come at different zoom levels, then you will certainly want to include various zoom levels in your training set. If all of your production data will be at a single zoom level, you can run a test: create one dataset version with only that zoom level and another version with variable zoom levels, then compare how the model performs with each.
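
If you want to run that comparison, here is a minimal sketch using the Ultralytics YOLO Python API. It assumes you have exported two dataset versions from Roboflow; the YAML file names (`single_zoom.yaml`, `multi_zoom.yaml`) and the training settings are placeholders, not anything specific to your project:

```python
# Minimal sketch: train one model per dataset version and compare validation mAP.
# Assumes the "ultralytics" package is installed and that the dataset YAMLs
# below are placeholder exports from Roboflow.
from ultralytics import YOLO

results = {}
for name, data_yaml in [("single_zoom", "single_zoom.yaml"),
                        ("multi_zoom", "multi_zoom.yaml")]:
    model = YOLO("yolov8n.pt")          # start both runs from the same pretrained weights
    model.train(data=data_yaml, epochs=50, imgsz=640, name=name)
    metrics = model.val()               # evaluate on that dataset's validation split
    results[name] = metrics.box.map50   # mAP@0.5 for a quick side-by-side comparison

print(results)
```

For a fairer comparison, you could validate both trained models on the same held-out set that matches your expected production zoom level (for example by passing `data="production_val.yaml"` to `model.val()`), rather than letting each model use its own validation split.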