Hi Roboflow Support Team,
I’m on the Core Plan and having issues with local inference for my fine-tuned RF-DETR model. The cloud API works perfectly, but local inference finds 0 detections on the same images.
Project Details:
- Workspace: woodvision
- Project: rv-c2vdq
- Model: Version 10 (RF-DETR Medium)
- API Key: UyXXXXXXXXXXXXXXX
- Plan: Core ($79/mo)
What Works:
- Using the Roboflow cloud API via the roboflow Python library - detections work perfectly
- Processing time: ~1500ms per image
What Doesn’t Work:
- Using local inference with the inference-gpu library
- Same model, same images = 0 detections found
- Code: model = get_model("rv-c2vdq/10", api_key=API_KEY), then model.infer(image_path, confidence=40)
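One thing I wondered while debugging: if infer() expects confidence on a 0.0-1.0 scale (as some local SDKs do) rather than the 0-100 scale the hosted API uses, then confidence=40 would silently filter out every prediction. Here is a quick sketch of that filtering logic as I understand it - the threshold handling below is my own assumption for illustration, not confirmed behavior of the inference-gpu library:

```python
# Hypothetical illustration (not actual inference-gpu internals): how a
# 0-100 threshold passed to an API expecting 0.0-1.0 would drop everything.

def filter_predictions(predictions, confidence):
    """Keep predictions whose score meets the threshold (scores are 0.0-1.0)."""
    return [p for p in predictions if p["confidence"] >= confidence]

preds = [
    {"class": "knot", "confidence": 0.92},
    {"class": "crack", "confidence": 0.55},
]

# A hosted-API-style threshold of 40 (percent) read as 40.0 -> nothing survives.
print(len(filter_predictions(preds, 40)))    # 0
# The same threshold expressed as 0.40 -> both detections survive.
print(len(filter_predictions(preds, 0.40)))  # 2
```

If that is the cause, the fix on my side would just be confidence=0.4, but I'd appreciate confirmation of which scale the local SDK expects.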
Environment:
- Python 3.11
- inference-gpu installed
- Windows with an NVIDIA Quadro P520 GPU
- The model loads successfully (no errors), but the predictions list is always empty
Use Case: I’m building a production lumber defect detection system that requires <500ms inference time, which is why I need local inference instead of cloud API.
Questions:
- Is there a deployment step I’m missing to enable local inference for RF-DETR models?
- Does my Core Plan support local inference for fine-tuned RF-DETR models?
- Are there any special configuration steps needed to make local inference work properly?
The console shows the model loading successfully, but it then finds 0 detections on images that clearly contain defects (verified via the cloud API).
Can you help me troubleshoot why local inference isn’t working?
Thanks, Erik