Can I use Roboflow for custom detection in an offline (no internet) environment?
My model is trained with Roboflow, yet I can’t find a way to download the .pt file.
Yes, you can use Roboflow for custom detection in an offline environment. However, you would need to download your model weights file first.
In the case of a YOLOv5 model, you can download the weights file (yolov5s.pt) from the YOLOv5 repository or directly from this link: https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s.pt as mentioned in the YOLOv5 Object Detection with OpenVINO™ Toolkit blog post.
For other models, you would need to follow the specific instructions provided by Roboflow for that model. For example, in the case of a YOLOv8 model, you can upload your custom YOLOv8 pt file as mentioned in this forum post.
Once you have downloaded your model weights file, you can use it for inference in an offline environment.
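Before unplugging, it can be worth sanity-checking that the weights file actually downloaded completely. A minimal sketch, assuming a modern PyTorch-style `.pt` checkpoint (which is a zip archive under the hood) — the filename is just an example, and this only checks the container format, not the model contents:

```python
import zipfile
from pathlib import Path


def looks_like_torch_checkpoint(path: str) -> bool:
    """Return True if the file exists and is a zip-format checkpoint.

    Modern PyTorch .pt files are zip archives, so the standard-library
    zipfile module can verify the download completed without needing
    torch installed or any network access.
    """
    p = Path(path)
    return p.is_file() and zipfile.is_zipfile(p)


# Check before you go offline (prints True/False for your local file):
print(looks_like_torch_checkpoint("yolov5s.pt"))
```

This catches the common failure mode of a truncated or HTML-error-page download before you discover it in a no-internet environment.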
Hi! I’m Roboflow’s AI powered bot. This is an automated response powered using AI with knowledge from our docs, blog, knowledge base, and previous forum responses. If you find this to be a helpful answer, please mark it as the solution using the checkbox icon below. If this doesn’t solve your issue, please follow up with what you’ve tried/why, and the community will continue to chime in as usual.
ok, but that is just the generic pretrained weights file. how do i get my best.pt?
The simple answer is: there is no way for you to download the model. The only way you can use your model is through API calls. So, basically, you put effort and money into training and fine-tuning your model, and in the end Roboflow is the true owner of your work. I guess that was the catch for using a free service. Learned this the hard way.
Currently, offline mode is included with the Enterprise plan.
Free and Starter Plan customers can still run models through Inference; however, offline mode is not included in that type of license. Please see this documentation paragraph for more details.
To be clear, this isn’t true!
To run it locally, use:
from inference import get_model

# Load your fine-tuned model by its Roboflow model ID
model = get_model("your-project/your-version", api_key="your-api-key")

image = "https://website.com/my-image"  # or a PIL.Image or a NumPy array
results = model.infer(image)[0]
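Once `results` comes back, post-processing needs no network access at all. A minimal sketch of turning predictions into (label, confidence, corner-box) rows, assuming each prediction carries center-based box fields (`x`, `y`, `width`, `height`) plus `class_name` and `confidence` — those field names are assumptions, so check them against your actual response object:

```python
def to_rows(predictions):
    """Convert center-based detection predictions to corner-based rows.

    Each input prediction is assumed to be dict-shaped with keys
    class_name, confidence, x, y, width, height (x/y = box center).
    """
    rows = []
    for p in predictions:
        # center-based box -> corner-based (x1, y1, x2, y2) box
        x1 = p["x"] - p["width"] / 2
        y1 = p["y"] - p["height"] / 2
        x2 = p["x"] + p["width"] / 2
        y2 = p["y"] + p["height"] / 2
        rows.append((p["class_name"], p["confidence"], (x1, y1, x2, y2)))
    return rows


sample = [{"class_name": "cat", "confidence": 0.9,
           "x": 50, "y": 40, "width": 20, "height": 10}]
print(to_rows(sample))  # [('cat', 0.9, (40.0, 35.0, 60.0, 45.0))]
```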
see more info here:
and to clarify our license:
In order to stay compliant with licensing requirements, fine-tuned model weights can only be used in an Inference deployment. For free and Starter Plan customers, your Roboflow license permits use of your models on one device. Enterprise customers can deploy models on multiple devices.
So if you use Inference on a single device, you are good to go offline on any plan.