Problems with Training in Colab vs. Roboflow

Why is the model I trained on Roboflow performing significantly better than the one I trained on Colab? For example, during object detection, the Roboflow model can make finer-grained distinctions, whereas the Colab-trained model tends to produce large, encompassing bounding boxes.

Which model are you training?

We apply a number of backend optimizations to improve model performance when training on the platform, including smart hyperparameter selection and a post-training regime. This applies to all of the models trained on the platform.
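As a rough illustration of what "smart hyperparameter selection" means in practice, here is a toy sketch of dataset-aware tuning. This is not Roboflow's actual heuristic or the RF-DETR training API; the function name, thresholds, and values are all invented for illustration. The point is simply that settings like epochs, learning rate, and warmup are chosen from dataset statistics rather than left at defaults, which is one reason a hand-run Colab notebook with default settings can underperform.

```python
# Toy sketch only: NOT Roboflow's real heuristics or API.
# Illustrates picking training hyperparameters from dataset size,
# the kind of tuning an automated platform can do for you.

def select_hyperparameters(num_images: int, num_classes: int) -> dict:
    """Pick training settings based on dataset size (toy heuristic)."""
    # Smaller datasets typically need more epochs and stronger regularization.
    if num_images < 1_000:
        epochs, lr, weight_decay = 100, 1e-4, 1e-3
    elif num_images < 10_000:
        epochs, lr, weight_decay = 50, 2e-4, 5e-4
    else:
        epochs, lr, weight_decay = 25, 4e-4, 1e-4
    return {
        "epochs": epochs,
        "learning_rate": lr,
        "weight_decay": weight_decay,
        # Scale warmup with the schedule so short runs still warm up.
        "warmup_epochs": max(1, epochs // 10),
        "num_classes": num_classes,
    }

print(select_hyperparameters(num_images=500, num_classes=3))
```

If you are training in Colab, explicitly setting values like these (rather than relying on library defaults) is a reasonable first step toward closing the gap.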

For RF-DETR specifically, we have a platform-exclusive pre-training checkpoint we created to generalize better across a wider variety of datasets.
