Seeking guidance for fine-tuning YOLOv8 OBB

Over the past two months, I’ve been working on training a YOLOv9 model to detect various elements in architectural plans. This model has performed exceptionally well.

However, since I also need to detect oriented elements, I decided to train a YOLOv8-OBB (oriented bounding box) model. Despite using the same dataset and hyperparameters, the YOLO-OBB model is showing significantly lower performance, even for non-oriented elements.
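For context, here is a minimal sketch of the kind of OBB training run I mean, using the Ultralytics API. The checkpoint size, `data.yaml` path, and hyperparameters below are placeholders, not my exact configuration:

```python
from ultralytics import YOLO

# Start from a pretrained OBB checkpoint (yolov8n-obb.pt is pretrained on DOTA)
model = YOLO("yolov8n-obb.pt")

# data.yaml points to the dataset exported in YOLOv8 OBB format (placeholder path)
results = model.train(data="data.yaml", epochs=100, imgsz=640)
```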

Could anyone provide guidance on why this might be happening and how to improve the performance?

Hey @Igor_Barcelos

Could you share your workspace and project ID so that I can take a look?

Oriented bounding boxes on Roboflow are generated from polygon annotations: an oriented box is fit to best enclose each polygon. If your annotations are not polygons, or the polygons do not fit cleanly to a rotated box (or rectangle), that could create some difficulties.
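This isn't Roboflow's exact fitting code, but conceptually it is similar to taking the minimum-area rotated rectangle of each polygon. A quick sketch with OpenCV (the polygon coordinates are made up for illustration):

```python
import numpy as np
import cv2

# A polygon annotation as (x, y) vertices (example values only)
polygon = np.array([[120, 40], [260, 55], [250, 140], [110, 125]], dtype=np.float32)

# Fit the minimum-area rotated rectangle to the polygon
(cx, cy), (w, h), angle = cv2.minAreaRect(polygon)

# Recover the four corners of the oriented box
box = cv2.boxPoints(((cx, cy), (w, h), angle))
print(box)
```

If your polygons are noisy or roughly circular, the resulting rotated box (and its angle) can be unstable, which tends to hurt OBB training more than it would a standard axis-aligned detector.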

I would also recommend looking at a keypoint detection project, as that can be more accurate at identifying common landmarks (such as recurring symbols on architectural plans) and can help determine orientation.