OAK-D Deployment Failure

OS: Ubuntu 20.04
roboflowoak version: 0.0.8 (installed from pip)
Similar Discussion:

Error Message:

(humanPose) yuxiang@yuxiang:~$ python3 /home/yuxiang/Desktop/code/depthAI/depthai-python/examples/SpatialDetection/oakNumbersInferencing.py
Traceback (most recent call last):
  File "/home/yuxiang/Desktop/code/depthAI/humanPose/lib/python3.8/site-packages/roboflowoak/pipe.py", line 44, in __init__
    self.detection_nn.setBlobPath(self.nn_path)
RuntimeError: BlobReader error: File does not seem to be a supported neural network blob

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/yuxiang/Desktop/code/depthAI/humanPose/lib/python3.8/site-packages/roboflowoak/__init__.py", line 49, in __init__
    self.dai_pipe = DepthAIPipeline(self.cache_path+"/roboflow.blob", self.size, self.resolution, self.class_names, rgb, self.colors, self.fast, self.confidence, self.overlap, self.sensor_mode, self.wide_fov, depth, device, legacy)
  File "/home/yuxiang/Desktop/code/depthAI/humanPose/lib/python3.8/site-packages/roboflowoak/pipe.py", line 46, in __init__
    raise Exception("Failure loading model...")
Exception: Failure loading model...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/yuxiang/Desktop/code/depthAI/humanPose/lib/python3.8/site-packages/roboflowoak/__init__.py", line 60, in __init__
    self.dai_pipe = DepthAIPipeline(self.cache_path+"/roboflow.blob", self.size, self.resolution, self.class_names, rgb, self.colors, self.fast, self.confidence, self.overlap, sensor_mode, wide_fov, depth, device, legacy)
NameError: name 'sensor_mode' is not defined

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/yuxiang/Desktop/code/depthAI/depthai-python/examples/SpatialDetection/oakNumbersInferencing.py", line 8, in <module>
    rf = RoboflowOak(model="numbers-2ewg8", confidence=0.05, overlap=0.5,
  File "/home/yuxiang/Desktop/code/depthAI/humanPose/lib/python3.8/site-packages/roboflowoak/__init__.py", line 62, in __init__
    raise Exception("Failure while retrying load weights - does this model, version, api key exist? can you curl api.roboflow.com, and can your device download files from google cloud storage? have you hit your device limit?")
Exception: Failure while retrying load weights - does this model, version, api key exist? can you curl api.roboflow.com, and can your device download files from google cloud storage? have you hit your device limit?
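For context, each "During handling of the above exception, another exception occurred" block means a new error was raised inside the handler for the previous one. The `NameError: name 'sensor_mode' is not defined` comes from the library's retry path and masks the root cause, which is the first error: the blob file failed to load (likely it never downloaded correctly). A minimal sketch of that chaining pattern (hypothetical names, not roboflowoak's actual code):

```python
# Minimal sketch of exception chaining: an error raised inside an `except`
# block is reported as "During handling of the above exception, another
# exception occurred". Names here are illustrative, not roboflowoak's code.
def build_pipeline():
    try:
        raise RuntimeError("File does not seem to be a supported neural network blob")
    except RuntimeError:
        # The retry path references a name that was never defined in this
        # scope, so a NameError is raised while the RuntimeError is handled.
        print(sensor_mode)  # NameError: name 'sensor_mode' is not defined

try:
    build_pipeline()
except NameError as err:
    # The original RuntimeError survives as the implicit context of the NameError.
    print(type(err.__context__).__name__)  # prints "RuntimeError"
```

So when debugging, read the first traceback in the chain, not the last one.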

code:

from roboflowoak import RoboflowOak
import cv2
import time
import numpy as np

if __name__ == '__main__':
    # instantiating an object (rf) with the RoboflowOak module
    rf = RoboflowOak(model="numbers-2ewg8", confidence=0.05, overlap=0.5,
    version="4", api_key="XXXXXXXXX", rgb=True,
    depth=True, device=None, blocking=True)
    # Running our model and displaying the video output with detections
    while True:
        t0 = time.time()
        # The rf.detect() function runs the model inference
        result, frame, raw_frame, depth = rf.detect(visualize=True)
        predictions = result["predictions"]
        # result structure:
        # {
        #     "predictions": [ {
        #         "x": (box center),
        #         "y": (box center),
        #         "width":
        #         "height":
        #         "depth":
        #         "confidence":
        #         "class":
        #         "mask":
        #     } ]
        # }
        # frame - frame after preprocessing, with predictions drawn
        # raw_frame - original frame from your OAK
        # depth - depth map for raw_frame, rectified to the center camera

        # timing: for benchmarking purposes
        t = time.time()-t0
        print("INFERENCE TIME IN MS ", t*1000)
        print("PREDICTIONS ", [p.json() for p in predictions])

        # setting parameters for depth calculation
        max_depth = np.amax(depth)
        cv2.imshow("depth", depth/max_depth)
        # displaying the video feed as successive frames
        cv2.imshow("frame", frame)

        # to stop inference, press 'q' with the window focused (or CTRL+C in the terminal)
        if cv2.waitKey(1) == ord('q'):
            break

Your help would be much appreciated!

Happy holidays!

Any ideas, anyone?

Hi @MacAutoPlow you’ll just want to remove visualize=True from rf.detect().

We made some updates that make that argument unnecessary. I'll update the comment on the old thread as well.

Here are the docs with the sample code snippet: Luxonis OAK (On Device) - Roboflow

Example with my Face Detection dataset:

from roboflowoak import RoboflowOak
import cv2
import time
import numpy as np

if __name__ == '__main__':
    # instantiating an object (rf) with the RoboflowOak module
    # API Key: https://docs.roboflow.com/rest-api#obtaining-your-api-key
    rf = RoboflowOak(model="face-detection-mik1i", confidence=0.05, overlap=0.5,
    version="15", api_key="XXXXXXXX", rgb=True,
    depth=True, device=None, blocking=True)
    # Running our model and displaying the video output with detections
    while True:
        t0 = time.time()
        # The rf.detect() function runs the model inference
        result, frame, raw_frame, depth = rf.detect()
        predictions = result["predictions"]
        # result structure:
        # {
        #     "predictions": [ {
        #         "x": (box center),
        #         "y": (box center),
        #         "width":
        #         "height":
        #         "depth":
        #         "confidence":
        #         "class":
        #         "mask":
        #     } ]
        # }
        # frame - frame after preprocessing, with predictions drawn
        # raw_frame - original frame from your OAK
        # depth - depth map for raw_frame, rectified to the center camera
        
        # timing: for benchmarking purposes
        t = time.time()-t0
        print("FPS ", 1/t)
        print("PREDICTIONS ", [p.json() for p in predictions])

        # setting parameters for depth calculation
        # comment out the following 2 lines if you're using an OAK without depth
        max_depth = np.amax(depth)
        cv2.imshow("depth", depth/max_depth)
        # displaying the video feed as successive frames
        cv2.imshow("frame", frame)
    
        # to stop inference, press 'q' with the window focused (or CTRL+C in the terminal)
        if cv2.waitKey(1) == ord('q'):
            break
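
One small hardening note on the depth display in both snippets: `depth/max_depth` divides by zero if a frame's depth map is all zeros (e.g. before the stereo pair has valid data). A guard like this avoids that (plain NumPy, my own sketch, not part of roboflowoak):

```python
import numpy as np

def normalize_depth(depth):
    """Scale a depth map to [0, 1] for cv2.imshow; tolerate an all-zero map."""
    max_depth = np.amax(depth)
    if max_depth <= 0:
        # No valid depth yet: return a black frame instead of dividing by zero.
        return np.zeros_like(depth, dtype=np.float64)
    return depth / max_depth
```

Then the display line becomes `cv2.imshow("depth", normalize_depth(depth))`.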