How to see the livestream video from a self-hosted deployment on a Raspberry Pi 5

Hello,

I succeeded in deploying a workflow on my Raspberry Pi as a self-hosted server. I have connected a USB camera to my Raspberry Pi, and it is running locally. I can see detection information in my terminal, but I cannot open the livestream video that would show the zone, bounding boxes, etc. How do I do that?

I have tried to use code from other tutorials, but without success.
For testing, I used this tutorial and workflow: LINK

This is my standard code from Roboflow:

from inference import InferencePipeline
import cv2

# 1. Define a sink function that receives each frame's predictions
def my_sink(result, video_frame):
    if result.get("output_image"): # Display an image from the workflow response
        cv2.imshow("Workflow Image", result["output_image"].numpy_image)
        cv2.waitKey(1)
    # Do something with the predictions of each frame
    print(result)


# 2. Initialize a pipeline object
pipeline = InferencePipeline.init_with_workflow(
    api_key="***",
    workspace_name="warehouse-test-mauta",
    workflow_id="red-zone-monitoring",
    video_reference=0, # Path to video, device id (int, usually 0 for built in webcams), or RTSP stream url
    max_fps=30,
    on_prediction=my_sink
)

# 3. Start the pipeline and wait for it to finish
pipeline.start()
pipeline.join()

It’s great to see you getting something deployed to an edge device like the Pi! Since I can’t really troubleshoot the hardware side without knowing what you have set up to display the output, I’ll skip that piece. But I do see that you might need to change the target in your my_sink from “output_image” to “label_visualization”. There’s a note in the link you posted that mentions this, and a sketch of the change is below.
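
Here’s a minimal sketch of that change, reusing your exact pipeline setup; the only assumption is that your workflow’s visualization output really is named “label_visualization”:

from inference import InferencePipeline
import cv2

def my_sink(result, video_frame):
    # Assumption: the workflow's visualization block exposes its output
    # as "label_visualization" -- substitute the exact name your
    # workflow's output panel shows.
    if result.get("label_visualization"):
        cv2.imshow("Workflow Image", result["label_visualization"].numpy_image)
        cv2.waitKey(1)
    print(result)

pipeline = InferencePipeline.init_with_workflow(
    api_key="***",
    workspace_name="warehouse-test-mauta",
    workflow_id="red-zone-monitoring",
    video_reference=0,
    max_fps=30,
    on_prediction=my_sink
)
pipeline.start()
pipeline.join()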

Ultimately, you need to refer to whatever your workflow is actually generating. When you look at your workflow’s outputs, you should see the name of the variable to use. For example, in my own (unusual) case I have to use “label_visualization_1” because of the way I built the workflow. If you’re not sure which names your workflow returns, you can print them from the sink, as in the sketch below. Hope some of this helps!
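
A throwaway sink like this (a hypothetical debug_sink, passed as on_prediction in the same init_with_workflow call you already have) will print the available output names for each frame:

def debug_sink(result, video_frame):
    # result is a dict keyed by the workflow's output names; printing the
    # keys reveals which visualization output you can display with cv2.
    print(list(result.keys()))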