Difference between a Roboflow model and YOLOv8

I ran a training session (v6) with Roboflow and the result was a mAP50 of 0.82.

I then trained a new dataset with YOLOv8 and uploaded the model back into Roboflow as v7.

Now when testing images, the bounding boxes are displayed much too small, and the detection results are also much worse.

Result Roboflow:

Result YOLOv8:

The question is: why could this be?
What parameters are used for Roboflow training?

The difference in results between your Roboflow model and YOLOv8 model could be due to several factors. Here are a few possibilities:

  1. Training Parameters: The training parameters used in Roboflow and YOLOv8 might be different. For instance, in Roboflow, the model was trained using the YOLOv8 architecture with a mean Average Precision (mAP) of 76% on the validation set. If the parameters used in your YOLOv8 run are different, it could affect the model’s performance.

  2. Dataset Quality and Size: The quality and size of the dataset used for training can significantly impact the model’s performance. If the dataset used with YOLOv8 is different from the one used in Roboflow, it could lead to different results.

  3. Preprocessing Steps: The preprocessing steps applied to the images before training can also affect the model’s performance. For instance, in Roboflow, images are resized to 416×416×3. If different preprocessing steps were applied in your YOLOv8 run, it could lead to different results.

  4. Model Architecture: The architecture of the model used for training can also impact the model’s performance. If the architecture used in YOLOv8 is different from the one used in Roboflow, it could lead to different results.
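To make the preprocessing point concrete: when a dataset is resized (e.g. to 416×416), the box labels have to be scaled by the same factors, and mixing up reference sizes between two pipelines produces mismatched boxes. A minimal sketch in plain Python (the sizes and the `scale_box` helper are illustrative, not taken from either training run):

```python
def scale_box(box, src_size, dst_size):
    """Scale a pixel-coordinate box (x1, y1, x2, y2) from one
    image size to another. Sizes are (width, height) tuples."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

# A box labeled on a 1280x720 original, rescaled for 416x416 training input:
print(scale_box((640, 360, 960, 540), (1280, 720), (416, 416)))
# ~ (208.0, 208.0, 312.0, 312.0)
# If a different target size (e.g. 224x224) is used in the other pipeline,
# the scale factors differ, and boxes computed for one size cannot be
# drawn directly at another.
```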

To get a better understanding of the training process in Roboflow, you can refer to the Roboflow documentation. It provides detailed information about the training graphs, insights, and metrics available for all Roboflow Train jobs.

If you’re still facing issues, I would recommend reaching out to Roboflow’s support for further assistance.

Hi! I’m Roboflow’s AI powered bot. This is an automated response powered using AI with knowledge from our docs, blog, and forum. If this solves your issue, please mark it as the solution using the checkbox icon. If this doesn’t solve your issue, please follow up with what you’ve tried/why, and the community will continue to chime in as usual.

OK, I thought that there could be different training parameters. I set the image size to 224×224 for both training sessions.

But why are the boxes with the result displayed much too small?

Here’s another example, with a well-known picture…

With a mAP of 87.3%, something can’t be right here.
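For what it’s worth, one common cause of boxes being drawn much too small is converting the model’s normalized predictions to pixels using the training resolution (e.g. 224×224) instead of the size of the image actually being displayed. A minimal sketch in plain Python (the `yolo_to_pixels` helper and the sizes are made-up examples, not taken from the models in this thread):

```python
def yolo_to_pixels(box_norm, image_size):
    """Convert a normalized YOLO box (cx, cy, w, h in 0..1)
    to pixel corners (x1, y1, x2, y2) for a (width, height) image."""
    cx, cy, w, h = box_norm
    iw, ih = image_size
    return (
        (cx - w / 2) * iw,
        (cy - h / 2) * ih,
        (cx + w / 2) * iw,
        (cy + h / 2) * ih,
    )

box = (0.5, 0.5, 0.4, 0.4)               # normalized prediction
print(yolo_to_pixels(box, (224, 224)))   # scaled to the training resolution
print(yolo_to_pixels(box, (1280, 720)))  # scaled to the image actually shown
# Drawing the first result onto the 1280x720 image produces a box that is
# far too small -- the same symptom described above.
```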

Hi @staebchen0

Could you help me understand the issue you’re experiencing so that I can help you better?

Is the problem that the image displayed in the inference demo result is too small? If so, that’s because the image that you put in to infer was small. If you want more customizable annotated image outputs, I suggest looking into the example web app available in the More Deployment Resources section which uses our hosted inference API.

Hi stellasphere,

Here’s what I was talking about:

I trained a model with Roboflow. When testing with images, the bounding boxes are displayed correctly, see picture 1 above.

Since you can only train a limited number of models in Roboflow, I trained the same dataset in a Colab notebook and created a YOLOv8 model. I then uploaded this model to Roboflow. While testing with images, I noticed that the boxes are not displayed correctly; see image 2.

The test used the same dataset, with the same parameters.

I want to understand where the difference comes from.

Hi @staebchen0

There are always some differences that occur when training in different environments, but some things to check when you are training in a Colab notebook are the epochs and the image size. Roboflow automatically stops training at the epoch where the model stops seeing improvement, but the training in the Colab notebook will stop at the specified number of epochs.

The preprocessing step in Roboflow changes the size of the images in the dataset, but the image size setting in the training configuration will resize them again to the configured size. Epochs are the “number of times to run through your training data”. You may need to adjust each of these settings to get a better-performing model.
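The auto-stop behaviour described above is usually called early stopping (in the Ultralytics trainer it corresponds to the `patience` setting). A minimal sketch of the idea in plain Python, with made-up mAP values, showing why a Roboflow-style run can end earlier than a fixed-epoch Colab run:

```python
def early_stop_epoch(map_per_epoch, patience=3):
    """Return the epoch at which training would stop: once the best
    validation mAP has not improved for `patience` epochs in a row."""
    best, best_epoch = float("-inf"), 0
    for epoch, score in enumerate(map_per_epoch):
        if score > best:
            best, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(map_per_epoch) - 1

# mAP plateaus after epoch 3, so training stops at epoch 6 instead of
# running through all 8 scheduled epochs.
scores = [0.40, 0.55, 0.62, 0.63, 0.62, 0.61, 0.63, 0.62]
print(early_stop_epoch(scores))  # → 6
```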

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.