I’ve been following the “set up inference server” Roboflow tutorial (X9jt8qb_igo) to the letter, trying to get local inference from a Roboflow model working on the Jetson Nano Developer Kit.
I set things up OK on the Jetson Nano 4GB using JetPack 4.6.1, and have the local inference server running on the Jetson in Docker with the following command:
docker run --net=host --gpus all roboflow/inference-server:jetson
The server appears to run OK. In another terminal, I’m trying to run a test inference on a local image using the command from the tutorial.
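For anyone trying to reproduce this, the request follows the usual Roboflow inference-server pattern from the docs; the model ID, version, API key, and image filename below are placeholders, not my actual values:

base64 YOUR_IMAGE.jpg | curl -d @- "http://localhost:9001/your-model/your-version?api_key=YOUR_API_KEY"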
Initially, on the device I see that it loads the model OK, but then I hit an error I can’t get past.
Can someone please assist with this issue or try to reproduce it? As it stands, the getting-started tutorial does not work out of the box on the Jetson Nano.
@NickJSI As mentioned above, your issue may be the same as ours: we got the same error message when following the written docs to deploy Roboflow 3.0 models on a Jetson Nano.