Roboflow Train usage with Python

Hi, I trained a model with “Roboflow Train”. Then I used the code referenced there (Using Your Webcam with Roboflow Models) and changed the video source from the webcam to an RTSP stream. This is the console output of the problem:

[rtsp @ 000001a6ebfe64c0] Received packet without a start chunk; dropping frame.
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:967: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'

Hi @Cagan,

Can you try updating the code to look like this, since you're using an RTSP stream?

# all `import` at the beginning
import json
import base64
import cv2
import numpy as np
import requests
import time

# --- constants ---  

ROBOFLOW_API_KEY = 'YOUR_ROBOFLOW_PRIVATE_API_KEY'
ROBOFLOW_MODEL = 'YOUR_ROBOFLOW_MODEL_ID/ROBOFLOW_MODEL_VERSION_NUMBER'
ROBOFLOW_SIZE = 400

# --- request configuration ---

params = {
    "api_key": ROBOFLOW_API_KEY,
    "format": "image",
    "stroke": "5"
}

headers = {
    "Content-Type": "application/x-www-form-urlencoded"
}

url = f"https://detect.roboflow.com/{ROBOFLOW_MODEL}"


# --- functions ---

def infer(img):
    # Resize (while maintaining the aspect ratio) to improve speed and save bandwidth
    height, width, channels = img.shape
    scale = ROBOFLOW_SIZE / max(height, width)
    img = cv2.resize(img, (round(scale * width), round(scale * height)))

    # Encode image to base64 string
    retval, buffer = cv2.imencode('.jpg', img)
    img_str = base64.b64encode(buffer)

    # Get prediction from Roboflow Infer API
    response = requests.post(url, params=params, data=img_str, headers=headers, stream=True)
    data = response.raw.read()
    
    #print(response.request.url)
    
    if not response.ok:
        print('status:', response.status_code)
        print('data:', data)
        return
    
    # Parse result image
    image = np.asarray(bytearray(data), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)

    return image

# --- main ---

video = cv2.VideoCapture("rtsp://192.168.1.2:8080/out.h264")

while True:
    start = time.time()
    ret, img = video.read()

    if ret:
        # show the raw frame only when it was read successfully - an empty frame crashes cv2.imshow()
        cv2.imshow('VIDEO', img)

        image = infer(img)

        if image is not None:
            cv2.imshow('image', image)

        end = time.time()
        print(1/(end-start), "fps")  # print() automatically adds a space between elements - you don't need a space in "fps"

    if cv2.waitKey(1) == ord('q'):  # `waitKey` should come after `imshow()` - it updates the windows and reads key presses
        break

# - end -

video.release()
cv2.destroyAllWindows()

Let me know if you still receive an error.

Hi @Mohamed,

It still gives the same errors, and it also printed:

status: 404
data: b'{"message":"Not Found"}'

Also, the window that pops up for the camera stream doesn't show any video.

Do you also have ffmpeg installed?

OpenCV relies on ffmpeg or other video backends for handling video formats and IP camera protocols (source: Stack Overflow).

To check if you have ffmpeg, run this:

python -c "import cv2; print(cv2.getBuildInformation())"

You may also need to install ffmpeg first, then create the Python environment (or virtual environment) and install the OpenCV package. From there, try running the code.

There are notes on our video inference GitHub repo for the installation process for ffmpeg: GitHub - roboflow-ai/video-inference: Example showing how to do inference on a video file with Roboflow Infer
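For the 404 Not Found you mentioned: a 404 from detect.roboflow.com usually means the model ID / version part of the URL doesn't match a model in your workspace, so it's worth double-checking that path. A minimal check, assuming the same constants as the code above (uncommenting the print(response.request.url) line inside infer() shows the same URL):

# compare this URL against the model ID and version shown in your Roboflow dashboard
ROBOFLOW_MODEL = 'YOUR_ROBOFLOW_MODEL_ID/ROBOFLOW_MODEL_VERSION_NUMBER'  # e.g. 'my-project/2' (hypothetical example)
print("https://detect.roboflow.com/" + ROBOFLOW_MODEL)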

Yes, I have already installed ffmpeg.

On the other hand, I also cannot use my dataset with VideoCapture(0). When I try it, the status: 404 / data: b'{"message":"Not Found"}' output disappears and Python stops responding.

Now I have solved the problem. I just reinstalled OpenCV once :slight_smile:

But the fps values are much lower than on the "Use Your Camera" page. Why is it like that, and how can I fix it?

The fps values are dependent on the host system you are running the model on.

Your CPU and GPU are major factors in the speed, as well as the type of stream you are using.

The baseline webcam deployment option is different from an RTSP stream, and that's likely giving you a difference in fps as well.
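If you want to see where the time is going, one option is to time the frame grab and the hosted inference request separately. A rough sketch, reusing video and infer() from the code above (so it assumes those are already defined):

import time

for _ in range(10):
    t0 = time.time()
    ret, img = video.read()
    t1 = time.time()

    if not ret:
        continue

    infer(img)  # HTTP round-trip to the hosted inference API
    t2 = time.time()

    print(f"capture: {t1 - t0:.3f}s, inference request: {t2 - t1:.3f}s")

If most of the time is spent in the inference request, the bottleneck is the network round-trip to the hosted API rather than your local CPU/GPU.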

Actually, this low fps problem shows up with the webcam; I couldn't test it with the RTSP stream yet. Also, I have a powerful CPU (i7-11800H) and GPU (RTX 3060). With this system, I can't make sense of 1-2 fps values.