How to embed the video frames from the inference pipeline in the web

Since the drop you observe is between inference_pipeline without Streamlit and inference_pipeline with frames passed through Streamlit, it seems the problem is outside the scope of inference. I’d suggest checking with ChatGPT, as it seems to provide quite useful hints.

I just asked ChatGPT and got a suggestion to replace

    frame_placeholder.image(frame_rgb)

with

    with frame_placeholder.container():
        st.image(frame_rgb, use_column_width=True)
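
For context, here is a minimal sketch of how that placeholder could sit inside a full Streamlit script. This is my own illustration, not code from the pipeline; the video source is just a stand-in:

    import cv2
    import streamlit as st

    st.title("Inference stream")
    frame_placeholder = st.empty()  # one placeholder, reused for every frame

    cap = cv2.VideoCapture(0)  # placeholder source; swap in your camera/stream
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Streamlit expects RGB, OpenCV delivers BGR
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with frame_placeholder.container():
            st.image(frame_rgb, use_column_width=True)
    cap.release()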

I hope these hints give you a good start, good luck!

One observation

I am displaying the stream with both Streamlit and OpenCV and I am getting the same FPS (8-10) for both, even though I am not sending, encoding, or decoding frames for OpenCV. The only change I made was adding Streamlit.
You can see in the code that I am not using frame_rgb for OpenCV.

        # annotated_frame arrives here after processing, e.g. labeling, drawing polygons, etc.
        frame_rgb = cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB)
        frame_placeholder.image(frame_rgb)

        window_info = cv2.getWindowImageRect('Resizable Window')
        width, height = window_info[2], window_info[3]

        # Check if the window is minimized or hidden
        if width <= 0 or height <= 0:
            print("Window is minimized or not properly visible, skipping frame.")
            return  # Skip this frame and continue with the next one

        try:
            # original annotated frame (not passed through Streamlit)
            resized_frame = cv2.resize(annotated_frame, (width, height))
        except cv2.error as e:
            print(f"Error resizing frame: {e}")
            return  # Skip this frame and continue with the next one

        # Display the resized frame in the window
        cv2.imshow('Resizable Window', resized_frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            raise SystemExit("Program terminated by user")
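
For anyone who wants to reproduce the measurement, here is a rough sketch of the kind of frame counter I mean (tick, frame_count and window_start are just names for illustration, not the exact code in my script):

    import time

    # Rough FPS estimate: count displayed frames over ~1 second windows.
    frame_count = 0
    window_start = time.time()

    def tick():
        """Call once per displayed frame; prints the FPS roughly every second."""
        global frame_count, window_start
        frame_count += 1
        elapsed = time.time() - window_start
        if elapsed >= 1.0:
            print(f"FPS: {frame_count / elapsed:.1f}")
            frame_count = 0
            window_start = time.time()

Calling something like this right after frame_placeholder.image(...) and again after cv2.imshow(...) is enough to compare the two paths.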

Thank you so much for your help. I will try to troubleshoot and see what can be done.

Thanks for sharing the observation. I have some suspicions about the script context set up by Streamlit; however, when I was running from my MacBook I did not experience the FPS drop… I will investigate further when time allows, as we currently have new features that take priority. Many thanks once again!

I don’t think it’s possible to increase the FPS with Streamlit.

Hi @Mubashir_Waheed, thank you for following up!

I personally think the challenge is the async nature of the Streamlit server combined with the OpenCV frame producer being non-async. But I’m no Streamlit guru, and unfortunately I have no time to deep-dive into this rabbit hole…
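
One way to probe that theory, just a sketch I have not benchmarked (the latest dict and capture_loop below are names I made up for illustration), would be to move the blocking OpenCV capture into a background thread and let the Streamlit script only render the newest frame:

    import threading
    import time

    import cv2
    import streamlit as st

    latest = {"frame": None}  # shared slot that always holds the newest frame

    def capture_loop(source=0):
        # Blocking producer: overwrites the slot as fast as it can read frames.
        cap = cv2.VideoCapture(source)
        while cap.isOpened():
            ok, frame = cap.read()
            if ok:
                latest["frame"] = frame
        cap.release()

    threading.Thread(target=capture_loop, daemon=True).start()

    placeholder = st.empty()
    while True:
        frame = latest["frame"]
        if frame is not None:
            placeholder.image(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        time.sleep(0.03)  # roughly cap the UI refresh rate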

Streaming over a network is a non-trivial task; that’s one of the reasons people build solutions like mediamtx. But it’s more involved than Streamlit, because you have to figure out all the components of the pipeline yourself.
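
For reference, the usual pattern with mediamtx is to publish frames as an RTSP stream (for example by piping raw frames into ffmpeg) and let mediamtx re-serve them to browsers. A rough sketch, assuming mediamtx is running locally on its default RTSP port; the URL, resolution and encoder settings are placeholders you would adjust:

    import subprocess
    import cv2

    width, height, fps = 640, 480, 25
    # ffmpeg reads raw BGR frames from stdin and publishes them over RTSP
    # to a mediamtx instance listening on its default port 8554.
    ffmpeg = subprocess.Popen(
        [
            "ffmpeg", "-y",
            "-f", "rawvideo", "-pix_fmt", "bgr24",
            "-s", f"{width}x{height}", "-r", str(fps),
            "-i", "-",
            "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
            "-f", "rtsp", "rtsp://localhost:8554/mystream",
        ],
        stdin=subprocess.PIPE,
    )

    cap = cv2.VideoCapture(0)  # placeholder source
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        ffmpeg.stdin.write(cv2.resize(frame, (width, height)).tobytes())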

Anyway, thank you very much for trying inference, and please keep in touch if you have any issues with inference_pipeline!

OK, I tried mediamtx as well, but even a simple stream doesn’t work on my Windows machine: I am getting a black screen in the browser. I opened an issue on GitHub, but there has been no response.
