Hello, I trained an RF-DETR model on my own dataset and started testing it via the “Try This Model” feature. When I ran a test image through that feature, it correctly identified all the objects. I then downloaded the model locally, loaded it with `model = RFDETRBaseNano(finetuned_path)`, and ran inference on the same image with `model.predict(image)`, but I did not get the same results: the model missed a lot of the objects it caught when I used the “Try This Model” feature. Is there a preprocessing step or something I’m missing? I was very surprised to see this discrepancy.
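One common cause of this kind of gap, assuming the weights really are the same, is the confidence threshold: a local `predict()` call applies a default cutoff (recent rfdetr releases document `threshold=0.5`, but verify against your installed version), while a hosted demo may use a lower one. A minimal, library-free sketch of how the cutoff alone can hide detections (the detection values below are invented purely for illustration):

```python
# Hypothetical detections as (label, confidence) pairs; values are made up.
detections = [
    ("car", 0.91),
    ("car", 0.62),
    ("person", 0.48),
    ("person", 0.35),
    ("dog", 0.21),
]

def filter_by_threshold(dets, threshold):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in dets if d[1] >= threshold]

strict = filter_by_threshold(detections, 0.5)  # a typical local default
loose = filter_by_threshold(detections, 0.3)   # a lower hosted-demo setting

print(len(strict))  # 2 detections survive at 0.5
print(len(loose))   # 4 detections survive at 0.3
```

If the local results change when you lower the cutoff, e.g. `model.predict(image, threshold=0.3)` (the `threshold` keyword appears in the rfdetr README, but double-check it against your version), the weights are likely identical and only the cutoff differs.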
Update: I tried using the pipeline inference API, and the detections matched the “Try This Model” feature. Are the weights available for download not the same as the hosted ones? I’ve spent the last couple of hours trying to debug this and I’m pretty confused.
Hi @mz0g!
Unfortunately, I’m unable to support models run outside of the Roboflow environment, because there are several confounding variables that could be causing the issue.
I strongly suggest deploying your model with inference; it’s a fantastic option that provides the easiest and quickest path to getting your model into production.
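For reference, here is a minimal sketch of what that looks like with the `inference` SDK. The model ID, API key, and image path are placeholders, and this assumes `pip install inference`; the `get_model`/`infer` calls follow the SDK’s published quickstart, so verify them against the current docs:

```python
def run_with_inference_sdk(image_path: str, model_id: str, api_key: str):
    """Run a prediction through Roboflow's inference SDK.

    The import is deferred so this sketch can be loaded without the SDK installed.
    """
    from inference import get_model  # requires `pip install inference`

    model = get_model(model_id=model_id, api_key=api_key)
    # infer() accepts an image path, URL, numpy array, or PIL image.
    return model.infer(image_path)

# Example call (placeholder values, not real):
# results = run_with_inference_sdk("test.jpg", "my-project/3", "YOUR_API_KEY")
```

Because this runs the same serving stack as the hosted endpoint, it should reproduce the “Try This Model” results, which matches what you observed with the pipeline inference API.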
OK, cool, I will check it out. I took a look at the documentation and I’m a little confused about how it works with credits. I am running inference on a video on a local GPU. Which of Roboflow’s offerings would this be considered?
Hi @mz0g!
Great question! This is known as a “Self Hosted Deployment”, and you can find more information on its credit consumption here: Roboflow Credits: Understanding Usage Based Pricing
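Since the workload is video on a local GPU, the SDK’s `InferencePipeline` is the usual entry point for self-hosted video inference. A hedged sketch follows; the model ID, video path, and callback are placeholders, and the `InferencePipeline.init` interface is taken from the inference docs, so check it against your installed version:

```python
def start_video_pipeline(model_id: str, video_reference: str, api_key: str):
    """Stream a local video through a self-hosted model and print predictions."""
    from inference import InferencePipeline  # requires `pip install inference`

    def on_prediction(predictions, video_frame):
        # Placeholder callback: swap in your own rendering or logging.
        print(predictions)

    pipeline = InferencePipeline.init(
        model_id=model_id,
        video_reference=video_reference,  # file path, RTSP URL, or webcam index
        on_prediction=on_prediction,
        api_key=api_key,
    )
    pipeline.start()
    pipeline.join()  # block until the video has been fully processed

# Example call (placeholder values):
# start_video_pipeline("my-project/3", "input.mp4", "YOUR_API_KEY")
```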