Deploy Model on Raspberry Pi

Hi there, I am trying to deploy a model from Roboflow on my Raspberry Pi, which will connect to my camera (using the RTSP protocol) and make real-time predictions.
NOTE: Raspberry Pi is running on Ubuntu Image.
Can you please let me know how to do that?

I have tried the below code on my Windows 11 machine, but it isn’t working on my Raspberry Pi:

# import the InferencePipeline interface
from inference import InferencePipeline
# import a built-in sink called render_boxes (sinks are the logic that happens after inference)
from inference.core.interfaces.stream.sinks import render_boxes

# create an inference pipeline object
pipeline = InferencePipeline.init(
    model_id="cow-lie-stand-walk/2", # set the model id (project name/version) of your Roboflow model
    video_reference="rtsp://", # set the video reference (source of video); it can be a link/path to a video file, an RTSP stream url, or an integer representing a device id (usually 0 for built-in webcams)
    on_prediction=render_boxes, # tell the pipeline object what to do with each set of inference results by passing a function
    api_key="<API-KEY>", # provide your roboflow api key for loading models from the roboflow api
)
# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()

When I run the above on the Pi, it gives the following error:

[02/10/24 18:19:51] ERROR    Could not sent prediction with frame_id=1 to sink due to error: OpenCV(4.9.0) /io/opencv/modules/highgui/src/window.cpp:1272: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'

Based on the information provided in the extracts, it seems like you’re trying to deploy a model on a Raspberry Pi using Roboflow. Here are the steps you can follow:

  1. Verify your Raspberry Pi’s system: You need a Raspberry Pi 4 (or Raspberry Pi 400) running the 64-bit version of Ubuntu. To verify that you’re running a compatible system, type arch into your Raspberry Pi’s command line and verify that it outputs aarch64.

  2. Install Docker on your Raspberry Pi: Open the terminal on the Raspberry Pi and install Docker using the convenience script:

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh


  3. Install the Roboflow Inference Server: The inference API is available as a Docker container optimized and configured for the Raspberry Pi. You can install and run the inference server using the following command:

    sudo docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu


  4. Install the Roboflow pip Package: Next, install the Roboflow Python package with pip install roboflow.

  5. Run Inference: With the Python SDK installed, you can run inference on your model in a few lines of Python code. Substitute the model name, version, and API key with the values associated with your Roboflow account and project, then run the script.
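As a minimal sketch using only the standard library (assuming the local inference server mirrors the hosted API’s POST /{model_id}?api_key=... route shape; the model id, key, and infer/build_url names here are placeholders, not official SDK calls):

```python
import base64
import json
from urllib.request import Request, urlopen

def build_url(server, model_id, api_key):
    # assumed route shape: the local server mirrors the hosted API
    return f"{server}/{model_id}?api_key={api_key}"

def infer(image_path, model_id="cow-lie-stand-walk/2",
          api_key="<API-KEY>", server="http://localhost:9001"):
    # read the image and base64-encode it for the request body
    with open(image_path, "rb") as f:
        payload = base64.b64encode(f.read())
    req = Request(
        build_url(server, model_id, api_key),
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    # POST the image and parse the JSON predictions
    with urlopen(req) as resp:
        return json.loads(resp.read())

# with the Docker container from step 3 running, you would call e.g.:
# predictions = infer("cow.jpg")
```

The official roboflow pip package wraps this same request for you; the sketch just shows what the server expects.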

Regarding the error you’re facing: it comes from the sink, not the model. render_boxes displays each annotated frame with OpenCV’s imshow, which requires an OpenCV build with GUI support; on a headless Ubuntu image (or with opencv-python-headless installed) that call fails with exactly the cvShowImage error you’re seeing. You can either install the dependencies the error message mentions (libgtk2.0-dev and pkg-config, then reinstall a GUI-capable OpenCV such as the opencv-python wheel), or pass a custom on_prediction sink that doesn’t open a window.

Hi! I’m Roboflow’s AI powered bot. This is an automated response powered using AI with knowledge from our docs, blog, and forum. If this solves your issue, please mark it as the solution using the checkbox icon. If this doesn’t solve your issue, please follow up with what you’ve tried/why, and the community will continue to chime in as usual.

Your code helps me do inference on images only, but I want to run it on an RTSP stream.

Hi @lambda

Have you taken a look at the blog post on performing inference on RTSP streams? How to Run Computer Vision Models on RTSP Streams

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.