Webcam Inference does not work with Jetson Nano inference server

Hello,

I have currently set up an NVIDIA Jetson Nano with Roboflow inference server by following this document. The Jetson Nano is flashed with JetPack 4.6.

When I try to run the below example on an image, it works well:

base64 YOUR_IMAGE.jpg | curl -d @- \
"http://localhost:9001/your-model/42?api_key=YOUR_KEY"

Then I try to run the webcam inference example, infer-simple.py from the roboflow-api-snippets repository:
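In outline, the script does something like this (a paraphrased sketch, not the exact file; the endpoint URL, model name, and header are placeholders standing in for my values):

import base64

import cv2
import numpy as np
import requests

# "line 22": the inference endpoint (placeholder values)
upload_url = "http://localhost:9001/yellow-flowers/1?api_key=YOUR_KEY&format=image"

video = cv2.VideoCapture(0)

def infer():
    # Grab a frame, encode it as JPEG then base64, POST it, and decode the
    # annotated image that the server sends back
    ret, img = video.read()
    retval, buffer = cv2.imencode(".jpg", img)
    img_str = base64.b64encode(buffer)
    resp = requests.post(upload_url, data=img_str, headers={
        "Content-Type": "application/x-www-form-urlencoded"
    }, stream=True)
    image = np.asarray(bytearray(resp.raw.read()), dtype="uint8")
    return cv2.imdecode(image, cv2.IMREAD_COLOR)  # None if the body is not an image

while cv2.waitKey(1) != ord('q'):
    image = infer()
    cv2.imshow('image', image)

video.release()
cv2.destroyAllWindows()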

However, it returns the following error:

Traceback (most recent call last):
  File "infer-simple.py", line 71, in <module>
    cv2.imshow('image', image)
cv2.error: OpenCV(4.5.5) /io/opencv/modules/highgui/src/window.cpp:1000: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'

The output log on the Jetson Nano shows that the model is loaded:

initializing...
inference-server is ready to receive traffic.
Downloading weights for yellow-flowers/1
requesting from roboflow server...
Weights downloaded in 2.13 seconds
Initializing model...
2022-03-31 11:45:30.612895: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2022-03-31 11:45:35.930139: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
This model execution did not contain any nodes with control flow or dynamic output shapes. You can use model.execute() instead.
Model prepared in 9.24 seconds

I have already changed “line 22” of the code I shared before (infer-simple.py) to point to the Jetson inference server.

However, when I change “line 22” back to the Roboflow cloud server, the example runs well.
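Concretely, “line 22” is the endpoint definition, so the change looks roughly like this (placeholder key; model name taken from my logs; query parameters as in my script):

# Roboflow cloud server (works):
upload_url = "https://detect.roboflow.com/yellow-flowers/1?api_key=YOUR_KEY&format=image"

# Jetson inference server (fails):
upload_url = "http://JETSON_IP:9001/yellow-flowers/1?api_key=YOUR_KEY&format=image"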

I have also tried printing the return value of “infer()” for debugging in the above code:

print(infer())

And it printed “None”, which means the result image could not be parsed.
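A guard like this (assuming infer() returns the decoded frame, or None on failure) makes the failure explicit instead of crashing inside imshow:

image = infer()
if image is None:
    # The response body could not be decoded into an image
    print("No image returned from the inference server")
else:
    cv2.imshow('image', image)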

Please advise on how to resolve this issue.

Appreciate it!
Thank you.

Hi @lakshanthad

We had someone post a similar issue earlier. They sent a DM stating it was due to them misformatting the JSON inputs. Does this sound like something that may have happened here as well?

Hi mate, as Kelly said, I was having the same problem today.

That cv2 error arises when there wasn’t an image for cv2 to read, so assuming you have your webcam plugged in, your problem is almost certainly that you didn’t fill in the config JSON file correctly.

In my case, when customizing the JSON I didn’t understand the formatting for the ‘model’ input and entered it incorrectly. So when the program tried to access the model on the website, the response came back as a NoneType, and cv2 can’t read nonexistent data, hence the error!

For the ‘model’ input, the template example in the JSON reads “xx-model-#”, but in reality, if your model is named “cats” and you want its third version, the model input should be “cats/3”.
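For example, assuming the field names from the snippets repo’s roboflow_config.json (double-check your copy), the relevant part would look something like:

{
    "ROBOFLOW_API_KEY": "YOUR_KEY",
    "ROBOFLOW_MODEL": "cats/3",
    "ROBOFLOW_SIZE": 416
}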


Thank you all for the responses @anon59033456 @Vulcan88!

Actually my JSON is fine. The model input is set as “<model_name>/<version_number>”.

As I stated earlier, it works well when using the cloud-hosted inference server. The problem only arises when using the Jetson inference server.

The model is downloaded properly, as I can see from this output on the Jetson:

initializing...
inference-server is ready to receive traffic.
Downloading weights for yellow-flowers/1
requesting from roboflow server...
Weights downloaded in 2.13 seconds
Initializing model...
2022-03-31 11:45:30.612895: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2022-03-31 11:45:35.930139: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
This model execution did not contain any nodes with control flow or dynamic output shapes. You can use model.execute() instead.
Model prepared in 9.24 seconds

Also, here is the output of starting the inference server on the Jetson for your further reference:

lakshanthad@lakshanthad-desktop:~$ sudo docker run --net=host --gpus all roboflow/inference-server:jetson
2022-04-01T03:22:41: PM2 log: Launching in no daemon mode
2022-04-01T03:22:41: PM2 log: [Watch] Start watching inference-server
2022-04-01T03:22:41: PM2 log: App [inference-server:0] starting in -cluster mode-
2022-04-01T03:22:41: PM2 log: App [inference-server:0] online
Platform node has already been set. Overwriting the platform with node.
2022-04-01 03:22:43.124687: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2022-04-01 03:22:43.230701: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2022-04-01 03:22:43.237283: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2022-04-01 03:22:43.237417: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2022-04-01 03:22:43.237447: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2022-04-01 03:22:43.240429: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2022-04-01 03:22:43.242820: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2022-04-01 03:22:43.243582: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2022-04-01 03:22:43.247126: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2022-04-01 03:22:43.251591: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2022-04-01 03:22:43.260614: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2022-04-01 03:22:43.260799: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2022-04-01 03:22:43.261057: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2022-04-01 03:22:43.261147: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2022-04-01 03:22:45.955380: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-04-01 03:22:45.955437: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 
2022-04-01 03:22:45.955454: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N 
2022-04-01 03:22:45.955675: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2022-04-01 03:22:45.955915: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2022-04-01 03:22:45.956134: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2022-04-01 03:22:45.956248: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2022-04-01 03:22:45.957622: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 352 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
initializing...
inference-server is ready to receive traffic.

From the above output, you can see there is a support issue:

I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero

Could this be the main issue?

Appreciate the help!

Thank you.

It might be. I found this answer to what looks like a very similar issue.

Your camera device cannot be properly instantiated (somewhere in the VideoStream class, I assume). In consequence, the image is None, and this leads to the follow-up error. You need to check whether your camera is working properly on the Nano and whether you chose the correct video device index. If it is a camera you’ve connected via the CSI interface, you need to use the GStreamer interface of the OpenCV video capture.
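For a CSI camera, the GStreamer capture would look something like this (a sketch; the width, height, and framerate are illustrative and need to match a mode your sensor supports):

import cv2

# Illustrative nvarguscamerasrc pipeline for a CSI camera on a Jetson;
# adjust width, height, and framerate to your sensor mode
gst_pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

video = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
if not video.isOpened():
    print("Failed to open the CSI camera via GStreamer")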

Hello Kelly,

Thank you for the response. According to the issue you shared, it seems like the camera is not properly configured. However, in my case, I have properly set up the camera.

Can I know whether your team has tested the Jetson Inference Server along with the webcam example I shared before? I am starting to think this is an underlying issue in the Jetson inference server itself.

Thank you.

Thanks, @lakshanthad

I’ve asked the wider team about this issue. I am seeing a “We are experiencing a temporary outage affecting downloads and documentation. Thanks for your patience.” notice on their forums. I’m not sure if this has anything to do with it, but I will be circling back once I have more insight from our team.

Yes, it has. Is this specifically a problem with video inference? Or does inferring on a single image with your Jetson Inference Server setup also not work?

Thank you @anon59033456.

Hello @brad,

It works on a single image. The problem is only with webcam inference.

The following is what I tried:

This worked:

sudo docker pull roboflow/inference-server:jetson
sudo docker run --net=host --gpus all roboflow/inference-server:jetson
base64 YOUR_IMAGE.jpg | curl -d @- \
"http://localhost:9001/your-model/42?api_key=YOUR_KEY"

This did not work:

git clone https://github.com/roboflow-ai/roboflow-api-snippets
cd roboflow-api-snippets/Python/webcam
# Change line 22 of infer-simple.py to the IP address of the Jetson device
python3 infer-simple.py

Appreciate the help!

Thank you.

Hello @lakshanthad - interesting…

So the curl post works, and I can see that your model is initializing in the server logs.

The NUMA error is benign.

I think the first step to debug would be to edit the POST request in the video inference script to more closely reflect the curl post.
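For example, something along these lines (a sketch using the requests library, with the same placeholder file, model, and key as the curl example):

import base64
import requests

# Mirror the working curl post: base64-encode the JPEG and send it as the body
with open("YOUR_IMAGE.jpg", "rb") as f:
    img_str = base64.b64encode(f.read())

resp = requests.post(
    "http://localhost:9001/your-model/42?api_key=YOUR_KEY",
    data=img_str,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(resp.status_code)
print(resp.text)  # inspect the raw response before trying to decode an image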
