Maximizing FOV with "Deploy to Luxonis OAK"

Hi, I'm new to Roboflow. I already trained a successful model that works for my robotics project. The problem comes when I take this model and deploy it to my OAK-1 device: when I do this, the FOV of my camera gets quite a bit smaller.

I understand why this is happening, but I need the full FOV, so I searched and found a post that explains how to get the full FOV at the cost of some accuracy:

https://docs.luxonis.com/projects/api/en/latest/tutorials/maximize_fov/

Is there any way I can take that "letterboxing" feature and make it work on my OAK-1 with my Roboflow model?

I'm using this basic code to deploy and run my model:

from roboflowoak import RoboflowOak
import cv2
import time
import numpy as np
from depthai_sdk import OakCamera

if __name__ == '__main__':
    # instantiating an object (rf) with the RoboflowOak module
    rf = RoboflowOak(model="", confidence=0.5, overlap=0.5,
                     version="", api_key="", rgb=True,
                     depth=False, device=None, blocking=True)
    # Running our model and displaying the video output with detections
    while True:
        t0 = time.time()
        # The rf.detect() function runs the model inference
        result, frame, raw_frame, depth = rf.detect()
        predictions = result["predictions"]
        # {
        #     predictions: [ {
        #         x: (middle),
        #         y: (middle),
        #         width:,
        #         height:,
        #         depth: ###->,
        #         confidence:,
        #         class:,
        #         mask: { }
        #     } ]
        # }
        # frame - frame after preprocs, with predictions
        # raw_frame - original frame from your OAK
        # depth - depth map for raw_frame, center-rectified to the center camera

        # timing: for benchmarking purposes
        t = time.time() - t0
        print("FPS ", 1 / t)
        print("PREDICTIONS ", [p.json() for p in predictions])

        # the following 2 lines stay commented out since this is an OAK without Depth
        # max_depth = np.amax(depth)
        # cv2.imshow("depth", depth / max_depth)
        # displaying the video feed as successive frames
        cv2.imshow("frame", frame)

        # how to close the OAK inference window / stop inference: CTRL+q or CTRL+c
        if cv2.waitKey(1) == ord('q'):
            break

You can use the advanced config, which allows for the full video frame FOV. You can do that by passing the following into RoboflowOak:

advanced_config = {"wide_fov": True}
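
For example, a rough sketch of the constructor call from your snippet above with the extra argument (model, version, and API key are left as empty placeholders, and this assumes advanced_config is accepted as a keyword argument, as described above):

from roboflowoak import RoboflowOak

# sketch: same constructor as in your snippet, with the advanced config passed in
rf = RoboflowOak(model="", confidence=0.5, overlap=0.5,
                 version="", api_key="", rgb=True,
                 depth=False, device=None, blocking=True,
                 advanced_config={"wide_fov": True})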


Thanks so much, Trevorhlynn, I am really grateful.

Is there any documentation where I could look for more useful configuration options?

Mostly, you will want to get very familiar with the Luxonis documentation! If you run into more issues, please let us know.


There is one more issue I just encountered: if I use the wide FOV config (1920x1080), the code runs around 4 times slower compared with the normal configuration (614x614). Is there any way to keep the big FOV but reduce the operational cost? I'm thinking of something like keeping the 16:9 ratio but with a smaller resolution. :face_with_raised_eyebrow:

@Maxwell do you have any ideas here?

Yes, the slowdown is expected. In order to utilize the full FOV, frames must be resized to the model dimensions, which requires another step in the DepthAI pipeline. You might want to experiment with lower sensor resolution modes to improve this speed, but unfortunately there isn't much more we can do. For full control over the CV pipeline you should use the DepthAI SDK with its built-in Roboflow integration (Roboflow Integration — DepthAI SDK Docs 1.13.1 documentation); a rough sketch is included at the end of this reply. Here are all the advanced configuration options for your reference.

advanced_config = {"wide_fov": True, "sensor_mode": "THE_720_P", "night_vision": True}
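
For instance, to keep the 16:9 FOV while lowering the per-frame cost, you could try something like the sketch below, which combines wide_fov with the 720p sensor mode (again assuming advanced_config is passed as a keyword argument; model, version, and API key are placeholders):

from roboflowoak import RoboflowOak

# sketch: full 16:9 FOV at a lower sensor resolution to reduce the resize cost
rf = RoboflowOak(model="", confidence=0.5, overlap=0.5,
                 version="", api_key="", rgb=True,
                 depth=False, device=None, blocking=True,
                 advanced_config={"wide_fov": True, "sensor_mode": "THE_720_P"})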

Also be aware that when using the advanced config, the extra step means we have to match frames across different pipeline timings, which sometimes results in empty frames, so we recommend a safeguard such as the one below:

if frame.any():
    cv2.imshow("frame", frame)
    cv2.imshow("raw frame", raw_frame)
else:
    print("no frame")
