I did a training session (v6) with Roboflow and the result was an mAP50 of 0.82.
I then trained a new dataset with YOLOv8 and uploaded the results back as v7.
Now, when testing images, the bounding boxes are displayed much too small, and the detection results are also much worse.
The question is, why could this be?
What parameters are used for Roboflow training?
The difference in results between your Roboflow model and YOLOv8 model could be due to several factors. Here are a few possibilities:
Training Parameters: The training parameters used in Roboflow and YOLOv8 might differ. For instance, in Roboflow, the model was trained using the YOLOv8 architecture with a mean Average Precision (mAP) of 76% on the validation set. If the parameters used in your YOLOv8 training are different, it could affect the model's performance.
Dataset Quality and Size: The quality and size of the dataset used for training can significantly impact the model's performance. If the dataset used in YOLOv8 is different from the one used in Roboflow, it could lead to different results.
Preprocessing Steps: The preprocessing steps applied to the images before training can also affect the model's performance. For instance, in Roboflow, images are resized to 416x416x3. If different preprocessing steps were applied in YOLOv8, it could lead to different results.
Model Architecture: The architecture of the model used for training can also impact the model’s performance. If the architecture used in YOLOv8 is different from the one used in Roboflow, it could lead to different results.
To get a better understanding of the training process in Roboflow, you can refer to the Roboflow documentation. It provides detailed information about the training graphs, insights, and metrics available for all Roboflow Train jobs.
If you’re still facing issues, I would recommend reaching out to Roboflow’s support for further assistance.
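A note on the "boxes much too small" symptom: if predictions made in the model's input space (e.g. a 224x224 resized frame) are drawn onto the original full-size image without scaling them back up, the boxes will look shrunken even when the detections themselves are fine. YOLOv8 normally handles this rescaling internally, but a custom display pipeline has to do it explicitly. Here is a minimal, framework-independent sketch (the function name and tuple layout are my own, not from any library):

```python
def rescale_box(box, model_size, orig_size):
    """Map a box predicted in model-input space back to the original image.

    box: (x1, y1, x2, y2) in model-input pixels (e.g. a 224x224 frame)
    model_size: (width, height) the network saw, e.g. (224, 224)
    orig_size: (width, height) of the original image, e.g. (1920, 1080)
    """
    sx = orig_size[0] / model_size[0]  # horizontal scale factor
    sy = orig_size[1] / model_size[1]  # vertical scale factor
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

# A box covering the left half of a 224x224 input...
box = (0, 0, 112, 224)
# ...should cover the left half of a 1920x1080 original:
print(rescale_box(box, (224, 224), (1920, 1080)))
# -> (0.0, 0.0, 960.0, 1080.0)
```

If this scaling step is missing, a box that is correct at 224x224 covers only a small corner of a larger original image, which matches the behavior described. (If the preprocessing uses letterboxing rather than a plain resize, the padding offsets also have to be subtracted before scaling.)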
Hi! I’m Roboflow’s AI powered bot. This is an automated response powered using AI with knowledge from our docs, blog, and forum. If this solves your issue, please mark it as the solution using the checkbox icon. If this doesn’t solve your issue, please follow up with what you’ve tried/why, and the community will continue to chime in as usual.
OK, I thought that there could be different training parameters. I set the image size to 224x224 for both training sessions.
But why are the boxes with the result displayed much too small?
Here’s another example, with a well-known picture…
With an mAP of 87.3%, something can't be right here.
Could you help me understand the issue you’re experiencing so that I can help you better?
Is the problem that the image displayed in the inference demo result is too small? If so, that's because the image you submitted for inference was small. If you want more customizable annotated image outputs, I suggest looking into the example web app available in the More Deployment Resources section, which uses our hosted inference API.
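When building a custom annotated output around the hosted inference API, one easy mistake is mixing up coordinate conventions: the API's predictions report a box center plus width and height, while most drawing routines want corner coordinates. A small sketch of the conversion (the dict keys follow the response shape I've seen in Roboflow's docs, but verify against the documentation for your endpoint):

```python
def to_corners(pred):
    """Convert a center-format prediction dict to (x1, y1, x2, y2).

    Assumes pred looks like {"x": cx, "y": cy, "width": w, "height": h}
    with pixel coordinates relative to the image sent for inference.
    """
    x1 = pred["x"] - pred["width"] / 2
    y1 = pred["y"] - pred["height"] / 2
    x2 = pred["x"] + pred["width"] / 2
    y2 = pred["y"] + pred["height"] / 2
    return x1, y1, x2, y2

pred = {"x": 100, "y": 80, "width": 40, "height": 60}
print(to_corners(pred))  # -> (80.0, 50.0, 120.0, 110.0)
```

Treating the center point as a top-left corner (or vice versa) shifts every box and can make them appear offset or undersized, so it's worth ruling this out alongside any resize-scaling issues.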