Deploy a Custom Model to an OAK-D Lite using roboflowoak

I'm trying to deploy a custom model to my OAK-D camera on a Windows 10 computer.

I followed the steps in the post Introducing the Roboflow OAK pip package, setting up my Conda environment and running the Python script for running inference with my model.

When I executed it, I got the following error:

Traceback (most recent call last):
  File "my file route", line 16, in <module>
    result, frame, raw_frame, depth = rf.detect(visualize=True)
TypeError: detect() got an unexpected keyword argument 'visualize'

Could you help me resolve this?

Hi @Cecilio - sorry for the error. Thanks for catching that.

The code has been updated, and I’m updating the blog post now.

The visualize keyword argument was removed: Luxonis OAK (On Device) - Roboflow

New sample code (visible from the above link, too):

from roboflowoak import RoboflowOak
import cv2
import time
import numpy as np

if __name__ == '__main__':
    # instantiating an object (rf) with the RoboflowOak module
    # API Key: https://docs.roboflow.com/rest-api#obtaining-your-api-key
    rf = RoboflowOak(model="YOUR-MODEL-ID", confidence=0.05, overlap=0.5,
                     version="YOUR-MODEL-VERSION-#", api_key="YOUR-PRIVATE_API_KEY",
                     rgb=True, depth=True, device=None, blocking=True)
    # Running our model and displaying the video output with detections
    while True:
        t0 = time.time()
        # The rf.detect() function runs the model inference
        result, frame, raw_frame, depth = rf.detect()
        predictions = result["predictions"]
        # result is a dict of the form:
        # {
        #     predictions: [ {
        #         x: (center),
        #         y: (center),
        #         width:
        #         height:
        #         depth:
        #         confidence:
        #         class:
        #         mask: { }
        #     } ]
        # }
        # frame - frame after preprocessing, with predictions drawn
        # raw_frame - original frame from your OAK
        # depth - depth map for raw_frame, center-rectified to the center camera
        
        # timing: for benchmarking purposes
        t = time.time()-t0
        print("FPS ", 1/t)
        print("PREDICTIONS ", [p.json() for p in predictions])

        # setting parameters for depth calculation
        max_depth = np.amax(depth)
        cv2.imshow("depth", depth/max_depth)
        # displaying the video feed as successive frames
        cv2.imshow("frame", frame)
    
        # how to stop inference: press 'q' in the display window (or CTRL+C in the terminal)
        if cv2.waitKey(1) == ord('q'):
            break
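As a side note, the comment block above describes x/y as box centers. Below is a minimal sketch of turning one such prediction into a drawable top-left corner. The sample values are made up for illustration; in the SDK, predictions are objects serialized via p.json() rather than plain dicts:

```python
# Hypothetical sample data mirroring the documented prediction fields
predictions = [
    {"x": 320, "y": 240, "width": 64, "height": 48,
     "depth": 1.25, "confidence": 0.87, "class": "pear"},
]

for p in predictions:
    # x/y are box centers, so shift by half the size to get a corner for drawing
    x0 = int(p["x"] - p["width"] / 2)
    y0 = int(p["y"] - p["height"] / 2)
    print(f'{p["class"]}: top-left ({x0}, {y0}), '
          f'{p["width"]}x{p["height"]}, conf {p["confidence"]:.2f}')
```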

Hi @Mohamed

Thanks for your quick reply.

Unfortunately, I copied your new code and I'm now seeing a new error:
Traceback (most recent call last):
  File "main.py", line 17, in <module>
    result, frame, raw_frame, depth = rf.detect()
  File "...\lib\site-packages\roboflowoak\__init__.py", line 63, in detect
    ret = self.dai_pipe.get()
  File "...\lib\site-packages\roboflowoak\pipe.py", line 188, in get
    detections = self.detection_depth(detections, depth)
  File "...\lib\site-packages\roboflowoak\pipe.py", line 145, in detection_depth
    depth_map = self.disparity_to_depth(depth)
  File "...\lib\site-packages\roboflowoak\pipe.py", line 134, in disparity_to_depth
    if disparity == 0:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

Can you suggest anything? Thanks in advance.
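For context on the traceback: `if disparity == 0:` compares a NumPy array against a scalar, which produces a boolean array rather than a single bool, and Python cannot decide the truth value of a multi-element array. A minimal reproduction of the error, independent of the SDK:

```python
import numpy as np

disparity = np.array([[0, 3], [5, 0]])

# Element-wise comparison yields a boolean array, not a single bool
mask = disparity == 0

try:
    if mask:  # same ambiguity the SDK hit inside disparity_to_depth()
        pass
except ValueError as e:
    print(e)  # "The truth value of an array with more than one element is ambiguous..."

# Reducing the array explicitly resolves the ambiguity
print(mask.any())  # True  - at least one pixel has zero disparity
print(mask.all())  # False - not every pixel has zero disparity
```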

@Cecilio what project type are you attempting to deploy to the OAK? Can you confirm it is object detection?

And have you updated to the latest version of the package?

  • pip install -U roboflowoak

Additionally, can you provide a copy of the code you’re running here (with the API key removed, and you can omit the model id too if you’d like to leave it off the forum)?

Hi @Mohamed,

Yes, it is an object detection project, and I have installed the latest roboflowoak package.
Please see below the code:
from roboflowoak import RoboflowOak
import cv2
import time
import numpy as np

if __name__ == '__main__':
    # instantiating an object (rf) with the RoboflowOak module
    # API Key: REST API - Roboflow
    rf = RoboflowOak(model="pears-tvvfl", confidence=0.05, overlap=0.5,
                     version="1", api_key="", rgb=True,
                     depth=True, device=None, blocking=True)
    # Running our model and displaying the video output with detections
    while True:
        t0 = time.time()
        # The rf.detect() function runs the model inference
        result, frame, raw_frame, depth = rf.detect()
        predictions = result["predictions"]
        # result is a dict of the form:
        # {
        #     predictions: [ {
        #         x: (center),
        #         y: (center),
        #         width:
        #         height:
        #         depth:
        #         confidence:
        #         class:
        #         mask: { }
        #     } ]
        # }
        # frame - frame after preprocessing, with predictions drawn
        # raw_frame - original frame from your OAK
        # depth - depth map for raw_frame, center-rectified to the center camera

    # timing: for benchmarking purposes
    t = time.time() - t0
    print("FPS ", 1 / t)
    print("PREDICTIONS ", [p.json() for p in predictions])

    # setting parameters for depth calculation
    max_depth = np.amax(depth)
    cv2.imshow("depth", depth / max_depth)
    # displaying the video feed as successive frames
    cv2.imshow("frame", frame)

    # how to close the OAK inference window / stop inference: CTRL+q or CTRL+c
    if cv2.waitKey(1) == ord('q'):
        break

Hi @Mohamed,

I kept trying, but I'm still getting the same error. Could it be that the new code introduced a bug, since I'm unable to execute the script? Thanks in advance.

Same problem here!
I'm also trying to deploy an object detection model to an OAK PoE, and I get the same error message after removing the visualize argument.


I am running into the exact same issue, though with an object detection model on an OAK-D Lite.


For the OAK PoE device, can you try instantiating the "rf" object with depth=False? You are using the SDK with depth enabled on a device that does not support depth sensing.

Additionally, for the OAK-D devices, and for general OAK deployment, we have since updated the documentation. For those who have not yet installed the package, install it with pip install roboflowoak==0.0.5. For those who already have it installed, upgrade with pip install -U roboflowoak==0.0.5


Thanks Mohamed! Changing the roboflowoak package to 0.0.5 fixed my issue.


Thanks @Mohamed! Also tested on my side; it's working with my OAK-D Lite device.


Thanks Mohamed, my problem is also solved.
I did:

  • installed package version 0.0.5
  • set depth to False
  • also had to comment out the following lines:
    #max_depth = np.amax(depth)
    #cv2.imshow("depth", depth/max_depth)
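Building on that workaround, here is a hedged sketch of gating the depth display behind a flag instead of commenting lines out. The helper name and flag are my own for illustration, not part of the SDK:

```python
import numpy as np

USE_DEPTH = False  # set to False on OAK devices without stereo depth

def normalize_depth(depth, use_depth=USE_DEPTH):
    """Scale a depth map to [0, 1] for display; return None when depth is disabled."""
    if not use_depth or depth is None:
        return None
    max_depth = np.amax(depth)
    if max_depth == 0:  # guard against dividing by an all-zero map
        return np.zeros_like(depth, dtype=float)
    return depth / max_depth

# In the main loop, only show the window when a map was produced, e.g.:
# shown = normalize_depth(depth, use_depth=True)
# if shown is not None:
#     cv2.imshow("depth", shown)
```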

Hi everyone, here is the updated process: OakD Deployment Failure - #3 by Mohamed

from roboflowoak import RoboflowOak
import cv2
import time
import numpy as np

if __name__ == '__main__':
    # instantiating an object (rf) with the RoboflowOak module
    # API Key: https://docs.roboflow.com/rest-api#obtaining-your-api-key
    rf = RoboflowOak(model="face-detection-mik1i", confidence=0.05, overlap=0.5,
                     version="15", api_key="XXXXXXXX", rgb=True,
                     depth=True, device=None, blocking=True)
    # Running our model and displaying the video output with detections
    while True:
        t0 = time.time()
        # The rf.detect() function runs the model inference
        result, frame, raw_frame, depth = rf.detect()
        predictions = result["predictions"]
        # result is a dict of the form:
        # {
        #     predictions: [ {
        #         x: (center),
        #         y: (center),
        #         width:
        #         height:
        #         depth:
        #         confidence:
        #         class:
        #         mask: { }
        #     } ]
        # }
        # frame - frame after preprocessing, with predictions drawn
        # raw_frame - original frame from your OAK
        # depth - depth map for raw_frame, center-rectified to the center camera
        
        # timing: for benchmarking purposes
        t = time.time()-t0
        print("FPS ", 1/t)
        print("PREDICTIONS ", [p.json() for p in predictions])

        # setting parameters for depth calculation
        # comment out the following 2 lines if you're using an OAK without depth
        max_depth = np.amax(depth)
        cv2.imshow("depth", depth/max_depth)
        # displaying the video feed as successive frames
        cv2.imshow("frame", frame)
    
        # how to stop inference: press 'q' in the display window (or CTRL+C in the terminal)
        if cv2.waitKey(1) == ord('q'):
            break

For those having issues with installing depthai with pip on M1 Macs:

Specific comment that outlines the install process: Installation issue when using M1 Chip · Issue #299 · luxonis/depthai · GitHub

  • Note that the "3 lines" referenced for deletion no longer exist in install_requirements.py (I just ran the process today), but the "48 lines" to delete do still exist. After deleting the 48 lines, be sure to "Write Out" (CTRL+O) and press Enter to save the changes.