How to deploy using downloaded weights

Hello, I have been attempting to use the model weights that are supposedly meant for use with PyTorch. Unfortunately, as other users have noticed, this does not work: Deploy Weights Error · Issue #357 · roboflow/roboflow-python, How to deploy a custom ViT based classification model to a on premises API? - Community Help - Roboflow

I have tried to use these “model weights” myself, but the file looks nothing like a usual checkpoint saved as a state dictionary, or even a TorchScript archive. It appears to be a string with some kind of encoding, ending with “src.classification.timm_model”. I have tried to load it with timm models, but that also failed. I also asked the support team for help, but the answer was that “Roboflow does not provide support for downloaded model weights used outside of its ecosystem”, which is already stated in the documentation.
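For reference, this is roughly the kind of diagnostic I ran (a minimal sketch; the filename is a placeholder, and I am only guessing that the file is some form of PyTorch pickle):

```python
import pickletools
import zipfile

import torch

WEIGHTS_PATH = "roboflow_weights.pt"  # placeholder for the downloaded file

# Attempt 1: an ordinary checkpoint (state dict or pickled nn.Module).
try:
    # weights_only=False is needed on PyTorch >= 2.6, where safe loading
    # became the default.
    obj = torch.load(WEIGHTS_PATH, map_location="cpu", weights_only=False)
    print("torch.load succeeded:", type(obj))
except Exception as exc:
    # A pickle that references a private module path such as
    # "src.classification.timm_model" fails here, because unpickling
    # tries to import that module and it does not exist locally.
    print("torch.load failed:", exc)

# Attempt 2: a TorchScript archive.
try:
    scripted = torch.jit.load(WEIGHTS_PATH, map_location="cpu")
    print("torch.jit.load succeeded")
except Exception as exc:
    print("torch.jit.load failed:", exc)

# Attempt 3: disassemble the embedded pickle without executing it.
# torch.save produces a zip archive whose data.pkl lists, in GLOBAL
# opcodes, every module the file expects to import at load time.
if zipfile.is_zipfile(WEIGHTS_PATH):
    with zipfile.ZipFile(WEIGHTS_PATH) as zf:
        pkl_name = next(n for n in zf.namelist() if n.endswith("data.pkl"))
        pickletools.dis(zf.read(pkl_name))
```

Attempt 3 is safe to run on an untrusted file because pickletools only disassembles the opcodes; it never imports or executes anything.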

They also instructed me to use Inference. But if Inference is the only functional way to deploy models trained on Roboflow, I have to ask what the point of the “Download Model Weights” option is. I don’t see why this is even a feature (and a paid one) if users cannot actually use those weights and no assistance is provided.

If anyone has faced the same issue and has any idea how to handle it - or if I should simply give up - I’d be truly grateful.

Hi @gbam - the supported method to deploy Roboflow models is via the inference package.
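Something like the following should work (a minimal sketch; the model ID and API key are placeholders for your own):

```python
# pip install inference
from inference import get_model

# Placeholders: substitute your own model ID and Roboflow API key.
model = get_model(model_id="your-project/1", api_key="YOUR_ROBOFLOW_API_KEY")

# Run the model on a local image path, URL, numpy array, or PIL image.
results = model.infer("path/to/image.jpg")
print(results)
```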

We offer customers the ability to export weights so that they aren’t locked into our platform by default, and also to support deployment targets where inference is not a good fit (particularly embedded devices that don’t support Docker).

However, due to the complexity of each individual deployment, we are unable to provide support for running models outside of our ecosystem.