Uploaded YOLOv8 model working incorrectly in Label Assist

I’ve created a card recognition dataset with Roboflow, exported it, and used it to train a YOLOv8 model, which I then uploaded back to Roboflow to use in Label Assist. The model was uploaded via Python using version.deploy(). However, the inference results differ drastically between running the model on the same image locally on my machine and inside Roboflow. Here are the results obtained in Label Assist:

And here are results from the same model used on my machine:

(not allowed a second picture since I’m a new user, will add to comment below)

(I have deleted some parts of the image for privacy reasons, the same full image was used for inference in both cases).

With local inference the model is close to ideal, but it produces quite low-quality predictions when used in Label Assist (the model’s high quality is also visible in the training metrics that Roboflow imported together with the weights). I would appreciate any advice here, especially whether this behaviour has been seen before and whether there is a known solution. Thanks!

Here is the local inference result:

Hi @Ivan - thanks for posting, and sorry you’re running into this issue.

Can you please add Roboflow support to your workspace? You can toggle support access via the Invite page. I’ll work to replicate and troubleshoot the issue.


Hi @Jacob_Witt, access granted to Roboflow Support (help@roboflow.com).

Great - looking into this now.

@Ivan - I can confirm I’m seeing the same ‘loose’ inference results from your uploaded models. However, I’m unable to debug further without the train directory within /runs/detect/train/ of your notebook.

Could you please send a zip of that directory to me? starter-plan@roboflow.com

@Jacob_Witt Thanks! I’ve just sent the data and look forward to hearing from you.

Hi @Ivan - we found the issue. Our YOLOv8 notebooks support ultralytics version 8.0.134, and it appears your model was trained using 8.0.160. The latest version of roboflow should warn you if you attempt to train on a different version, so you may want to update your roboflow package for the future.

Can you try training on 8.0.134 and let us know if that resolves the issue?
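For reference, a minimal sketch of a pre-upload version check based on the numbers in this thread (8.0.134 as the supported version, 8.0.160 as the mismatched one). The helper functions below are purely illustrative and not part of any Roboflow or Ultralytics API:

```python
# Illustrative sketch: verify the installed ultralytics version matches what
# the Roboflow YOLOv8 notebooks reportedly support (8.0.134, per the reply
# above) before training and uploading. version_tuple/is_supported are our
# own hypothetical helpers, not library calls.

SUPPORTED_ULTRALYTICS = "8.0.134"

def version_tuple(version: str) -> tuple:
    """Parse a dotted version string like '8.0.160' into (8, 0, 160)."""
    return tuple(int(part) for part in version.split("."))

def is_supported(installed: str, supported: str = SUPPORTED_ULTRALYTICS) -> bool:
    """True only when the installed version exactly matches the supported one."""
    return version_tuple(installed) == version_tuple(supported)

# The mismatch reported in this thread:
print(is_supported("8.0.160"))  # False (trained version)
print(is_supported("8.0.134"))  # True (supported version)
```

In practice you would compare `ultralytics.__version__` against the supported version before calling version.deploy(), and downgrade with `pip install ultralytics==8.0.134` if it differs.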

Hi @Jacob_Witt, unfortunately, it doesn’t help - I downgraded ultralytics, trained a model with version 8.0.134, uploaded it with roboflow version 1.1.4 without any warning messages, and still got the same ‘loose’ results (this model also works close to perfectly on my local machine); you can find it as the dataset v28 model in my project.

I also tried updating roboflow to version 1.1.6 and uploading the same model trained with ultralytics 8.0.134; again there were no warning messages and I still got the same ‘loose’ results; you can find it as the dataset v29 model in my project.

@Ivan - do you mind emailing me the v28 roboflow.zip file? Additionally, could you include the code you are using for training and deployment so we can try to reproduce the issue?

Hi @Jacob_Witt, sorry for the delay; sure, I will send the data shortly.