I’m on the Basic Plan and I’m having issues running my workflow locally on a Jetson Orin Nano (JetPack 6.2.2) using a YOLOv8 detection model.
Workspace Details:
• api_key=[removed by mod]
• workspace_name="kawadard"
• workflow_id="check"
What Works:
The cloud API works perfectly.
I was also able to process video streams (RTSP link from an OAK camera) using my workflow locally (Deploy → Video → Live Video → Run Locally). Both worked successfully:
• My Windows 11 PC
• Jetson Orin Nano running JetPack 5.1.1
I upgraded to JetPack 6.2.2 because I want to use the Detections Combine block in my workflow.
I am able to successfully connect to my inference server using the "Choose an inference server → Other → Connect → Locally → RTSP → Run Workflow" option with JetPack 6.2.2 in the Roboflow dashboard.
What Doesn't Work:
• Processing video streams (RTSP link from OAK camera) with my workflow locally on Jetson Orin Nano running JetPack 6.2.2.
• Code used (from Deploy → Video → Live Video → Run Locally):
from inference import InferencePipeline

pipeline = InferencePipeline.init_with_workflow(
    api_key="[removed by mod]",
    workspace_name="kawadard",
    workflow_id="check",
    video_reference="rtsp://localhost:8554/mystream",
    # ... remaining arguments were cut off in the original post
)
pipeline.start()
pipeline.join()
After waiting for a while, the system stopped automatically without displaying anything:
[ WARN:308@339.152] global cap_ffmpeg_impl.hpp:453 _opencv_ffmpeg_interrupt_callback Stream timeout triggered after 30007.662934 ms
[ WARN:309@379.845] global cap_ffmpeg_impl.hpp:453 _opencv_ffmpeg_interrupt_callback Stream timeout triggered after 30028.735226 ms
[ WARN:309@380.083] global cap_ffmpeg_impl.hpp:453 _opencv_ffmpeg_interrupt_callback Stream timeout triggered after 30288.405730 ms
Killed
Questions:
Can you help me troubleshoot why local inference isn't working?
Thank you very much!
Project Type: Object detection
Operating System & Browser: Linux
Project Universe Link or Workspace/Project ID: workspace_name="kawadard", workflow_id="check", Project ID="eq_detection-py6oi/7"
Do you grant Roboflow Support permission to access your Workspace for troubleshooting? (Yes/No): Yes
I'll try to repro the issue, but I suspect it's because you're running InferencePipeline, which uses the locally installed inference package for inferencing, so it runs on the CPU instead of the GPU. You should run the inference server in a GPU-enabled Docker container, use InferenceHttpClient (.run_workflow() or .start_inference_pipeline_with_workflow()), and point the HTTP client at the server running in the Docker container (localhost:9001 by default).
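To make that concrete, here is a hedged sketch of the client side. The .start_inference_pipeline_with_workflow() method name comes from the advice above, but its exact parameter names and the InferenceHTTPClient constructor signature are assumptions to verify against the inference-sdk docs:

```python
def start_remote_pipeline(api_key: str, rtsp_url: str):
    # Assumption: inference-sdk exposes InferenceHTTPClient with this
    # constructor; check the inference-sdk documentation for the exact API.
    from inference_sdk import InferenceHTTPClient  # pip install inference-sdk

    client = InferenceHTTPClient(
        api_url="http://localhost:9001",  # the GPU Docker container's server
        api_key=api_key,
    )
    # The server process (not this script) consumes the RTSP stream and runs
    # the workflow; parameter names below are assumed, not confirmed.
    return client.start_inference_pipeline_with_workflow(
        video_reference=rtsp_url,
        workspace_name="kawadard",
        workflow_id="check",
    )

# Usage (on the Jetson, with the GPU container already running):
# start_remote_pipeline("<your-new-api-key>", "rtsp://localhost:8554/mystream")
```

This way the heavy inference work happens inside the container on the GPU, and the Python script is only a thin control client.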
I noticed you shared your API key, I removed that, but you should rotate it (remove old one and create a new one) ASAP.
Thank you for your response. Your guess about the cause may be correct, but I think it is more likely a compatibility issue, because:
This indicates that the system is getting stuck at the display step. I run cv2.imshow() separately for testing, and it successfully displays a single image. My Jetson has OpenCV v4.10.0 installed with CUDA.
So, could you help me check where the error is really coming from?
Regarding the method you mentioned earlier (InferenceHttpClient), I’m very interested in using it. However, I’ve only been able to find examples using single images. So if you don’t mind, could you share a sample code or documentation that deploys with live video input (Example: RTSP link)?
Thank you also for reminding me about the API key issue, I really hadn’t noticed it. I’ve already created a new one, so it’s fine. I would really appreciate it if you could respond as soon as possible. Thank you very much!
Hi @erik_roboflow,
Thank you for the quick reply!
I tried running your code. At first, it threw a "500 server error… Internal error." I believe this happened because the inference-server container couldn't access the host's RTSP server. So I changed the line -p 9001:9001 \ to --network host \ and ran it again. Luckily, it ran successfully:
Meanwhile, I can still do the same thing with the RTSP link and OpenCV in a simple way, as follows; so I don't think this error is caused by my OpenCV package:
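A minimal OpenCV-only loop of the kind described above can be sketched as follows (the stream URL is reused from the earlier posts; the window name and quit-key handling are my own additions):

```python
def show_rtsp(url: str, window: str = "stream") -> None:
    import cv2  # opencv-python, or the system OpenCV build

    cap = cv2.VideoCapture(url)
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break  # stream ended or dropped
            cv2.imshow(window, frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break  # quit on 'q'
    finally:
        cap.release()
        cv2.destroyAllWindows()

# Usage: show_rtsp("rtsp://localhost:8554/mystream")
```

If this loop displays frames but the full inference pipeline does not, that points at the pipeline's environment rather than OpenCV itself.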
The code provided works on my end. I mocked an RTSP stream, and it works as expected, so my assumption is the problem is your local server. Could you check the container logs? Docker Desktop helps a ton here; you can just click on the container to check logs.
In your case you’d need to share both ports 9001 (so your py script can connect to docker) and 8554 (so server can access rtsp cam).
However, I don’t think the issue was only the OpenCV package.
With OpenCV 4.10.0 (CUDA) I can still use imshow() for single images or videos without any problem, so it wasn’t purely an OpenCV bug.
The failure only happened in my full pipeline (Roboflow Inference + CUDA build) when running inside my container/host environment: after simply importing the inference-sdk library, the following code failed to display an image:
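A sketch of the failing pattern described above (the image path and window name are placeholders; the key point is that the import alone precedes the failure):

```python
def repro(image_path: str) -> None:
    # Per the description above: merely importing inference-sdk before using
    # cv2.imshow() was enough to break display on this JetPack 6.2.2 setup.
    import inference_sdk  # noqa: F401  (import only; never called)
    import cv2

    img = cv2.imread(image_path)
    cv2.imshow("test", img)  # reportedly hung / showed nothing after the import
    cv2.waitKey(0)
    cv2.destroyAllWindows()

# Usage: repro("test.jpg")
```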
Update: I think I have identified the exact root cause of the issue.
After reinstalling inference v1.0.1, the system again failed to display anything. I was able to fix this by removing the bundled opencv-python dependency and downgrading NumPy to 1.26.4.
This indicates an incompatibility between the pip opencv-python wheel pulled in by the inference package (tested across multiple versions) and the system OpenCV-with-CUDA build on JetPack 6.2.2.
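Since the fix above pinned NumPy to 1.26.4, a small version gate can confirm the interpreter is still on the 1.x series after any reinstall. This helper is my own illustrative sketch, not part of the inference package:

```python
def numpy_predates_2(version: str) -> bool:
    """Return True for 1.x NumPy versions, e.g. the 1.26.4 pin used above."""
    major = int(version.split(".")[0])
    return major < 2

# numpy_predates_2("1.26.4")  -> True
# numpy_predates_2("2.1.0")   -> False
#
# To check the installed build:
#   import numpy; print(numpy.__version__, numpy_predates_2(numpy.__version__))
```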