Error deploying custom trained model on NVIDIA Jetson Xavier

@David_Mezey, my first guess is that the cache has become corrupted. Can you try running without the --mount argument and see if it works? If it does, you can still use the --mount argument, but you’ll want to delete the data in it and start fresh.
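For reference, "delete the data and start fresh" means removing the named Docker volume so the server re-downloads the model artifacts. A guarded sketch, assuming the volume is called `roboflow` as in the `--mount source=roboflow,...` flag used below:

```shell
# Guarded sketch: reset the model cache held in the named Docker volume.
# Assumes the volume name "roboflow" from `--mount source=roboflow,...`.
# No-op on machines without Docker installed.
if command -v docker >/dev/null 2>&1; then
  docker volume rm roboflow 2>/dev/null || true   # drop cached model artifacts
  docker volume create roboflow >/dev/null        # recreate it empty (idempotent)
fi
```

Restarting the container afterwards forces a clean download from the Roboflow API.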

@Paul Unfortunately, running without the --mount argument, as follows:

sudo docker run --privileged --net=host --runtime=nvidia -e NUM_WORKERS=1 roboflow/roboflow-inference-server-trt-jetson:0.5.4

also yields an error. Here is the full log from the container:

/usr/local/lib/python3.6/dist-packages/clip/clip.py:24: UserWarning: PyTorch version 1.7.1 or higher is recommended
  warnings.warn("PyTorch version 1.7.1 or higher is recommended")
/usr/local/lib/python3.6/dist-packages/boto3/compat.py:88: PythonDeprecationWarning: Boto3 will no longer support Python 3.6 starting May 30, 2022. To continue receiving service updates, bug fixes, and security updates please upgrade to Python 3.7 or later. More information can be found here: https://aws.amazon.com/blogs/developer/python-support-policy-updates-for-aws-sdks-and-tools/
  warnings.warn(warning, PythonDeprecationWarning)
INFO:     Started server process [8]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:9001 (Press CTRL+C to quit)
2023-07-27 14:51:00.011593846 [W:onnxruntime:Default, tensorrt_execution_provider.h:59 log] [2023-07-27 14:51:00 WARNING] external/onnx-tensorrt/onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
2023-07-27 14:51:02.677981223 [E:onnxruntime:, inference_session.cc:1588 operator()] Exception during initialization: /home/onnxruntime/onnxruntime/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:801 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: /model.22/Range_output_0 has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs

Downloading model artifacts from Roboflow API
Resize method is 'Stretch to'
Creating inference session
INFO:     127.0.0.1:35462 - "POST /cobe/1?api_key=API_KEY HTTP/1.1" 500 Internal Server Error

OK, let me try running this same model on my Jetson Nano. I’ll reflash to JetPack 4.6 and see if I can reproduce the error.

In the meantime, the newer Jetson Nanos are capable of running JetPack 5.0+, which we have seen success with, if that’s an option for you.

I’ll post back here with findings!

@Paul, thank you for the help. We are using this SD card image: https://developer.nvidia.com/embedded/l4t/r32_release_v7.1/jp_4.6.1_b110_sd_card/jeston_nano/jetson-nano-jp461-sd-card-image.zip

Unfortunately, changing the hardware isn’t an option. We are working on a public project at the intersection of science and art, and we have already purchased the necessary components, which we cannot change. On the other hand, if you could provide a way to fall back to training Roboflow 2.0 models, that could be a solution, as all of those we tested seemed to work on the Nano.

Looking forward to your answer, and thanks again!

My environment:

nvidia@nvidia-desktop:~/jetsonUtilities$ python jetsonInfo.py 
NVIDIA NVIDIA Orin NX Developer Kit
 L4T 35.3.1 [ JetPack UNKNOWN ]
   Ubuntu 20.04.6 LTS
   Kernel Version: 5.10.104-tegra
 CUDA 11.4.315
   CUDA Architecture: 8.7
 OpenCV version: 4.9.0
   OpenCV Cuda: YES
 CUDNN: 8.6.0.166
 TensorRT: 8.5.2.2
 Vision Works: NOT_INSTALLED
 VPI: 2.2.7
 Vulcan: 1.3.204

Error for the CPU container:

nvidia@nvidia-desktop:~/jetsonUtilities$ sudo docker run -p 9001:9001 roboflow/roboflow-inference-server-cpu:latest

SupervisionWarnings: BoxAnnotator is deprecated: `BoxAnnotator` is deprecated and will be removed in `supervision-0.22.0`. Use `BoundingBoxAnnotator` and `LabelAnnotator` instead
INFO:     Started server process [7]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:9001 (Press CTRL+C to quit)
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 715, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 1058, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.9/site-packages/urllib3/connection.py", line 419, in connect
    self.sock = ssl_wrap_socket(
  File "/usr/local/lib/python3.9/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(
  File "/usr/local/lib/python3.9/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/local/lib/python3.9/ssl.py", line 501, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/local/lib/python3.9/ssl.py", line 1074, in _create
    self.do_handshake()
  File "/usr/local/lib/python3.9/ssl.py", line 1343, in do_handshake
    self._sslobj.do_handshake()
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 799, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/local/lib/python3.9/site-packages/urllib3/packages/six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 715, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 1058, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.9/site-packages/urllib3/connection.py", line 419, in connect
    self.sock = ssl_wrap_socket(
  File "/usr/local/lib/python3.9/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(
  File "/usr/local/lib/python3.9/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/local/lib/python3.9/ssl.py", line 501, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/local/lib/python3.9/ssl.py", line 1074, in _create
    self.do_handshake()
  File "/usr/local/lib/python3.9/ssl.py", line 1343, in do_handshake
    self._sslobj.do_handshake()
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/inference/core/roboflow_api.py", line 78, in wrapper
    return function(*args, **kwargs)
  File "/app/inference/core/roboflow_api.py", line 353, in get_from_url
    return _get_from_url(url=url, json_response=json_response)
  File "/app/inference/core/roboflow_api.py", line 357, in _get_from_url
    response = requests.get(wrap_url(url))
  File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/inference/core/interfaces/http/http_api.py", line 163, in wrapped_route
    return await route(*args, **kwargs)
  File "/app/inference/core/interfaces/http/http_api.py", line 1408, in legacy_infer_from_request
    self.model_manager.add_model(
  File "/app/inference/core/managers/decorators/fixed_size_cache.py", line 61, in add_model
    raise error
  File "/app/inference/core/managers/decorators/fixed_size_cache.py", line 55, in add_model
    return super().add_model(model_id, api_key, model_id_alias=model_id_alias)
  File "/app/inference/core/managers/decorators/base.py", line 55, in add_model
    self.model_manager.add_model(model_id, api_key, model_id_alias=model_id_alias)
  File "/app/inference/core/managers/base.py", line 60, in add_model
    model = self.model_registry.get_model(resolved_identifier, api_key)(
  File "/app/inference/models/vit/vit_classification.py", line 26, in __init__
    super().__init__(*args, **kwargs)
  File "/app/inference/core/models/classification_base.py", line 40, in __init__
    super().__init__(*args, **kwargs)
  File "/app/inference/core/models/roboflow.py", line 607, in __init__
    self.initialize_model()
  File "/app/inference/core/models/roboflow.py", line 692, in initialize_model
    self.get_model_artifacts()
  File "/app/inference/core/models/roboflow.py", line 221, in get_model_artifacts
    self.cache_model_artefacts()
  File "/app/inference/core/models/roboflow.py", line 231, in cache_model_artefacts
    self.download_model_artifacts_from_roboflow_api()
  File "/app/inference/core/models/roboflow.py", line 287, in download_model_artifacts_from_roboflow_api
    environment = get_from_url(api_data["environment"])
  File "/app/inference/core/roboflow_api.py", line 80, in wrapper
    raise RoboflowAPIConnectionError(
inference.core.exceptions.RoboflowAPIConnectionError: Could not connect to Roboflow API.
INFO:     172.17.0.1:47956 - "POST /hl-hif5j/1?api_key=xxxxxx HTTP/1.1" 503 Service Unavailable

(The same traceback repeats verbatim for the second request.)
INFO:     172.17.0.1:47968 - "POST /hl-hif5j/1?api_key=xxxxxxx HTTP/1.1" 503 Service Unavailable

Error for the Jetson container:

sudo docker run --privileged --net=host --runtime=nvidia \
> --mount source=roboflow,target=/tmp/cache -e NUM_WORKERS=1 \
> roboflow/roboflow-inference-server-jetson-5.1.1:latest

UserWarning: Unable to import Axes3D. This may be due to multiple versions of Matplotlib being installed (e.g. as a system package and as a pip package). As a result, the 3D projection is not available.
SupervisionWarnings: BoxAnnotator is deprecated: `BoxAnnotator` is deprecated and will be removed in `supervision-0.22.0`. Use `BoundingBoxAnnotator` and `LabelAnnotator` instead
INFO:     Started server process [20]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:9001 (Press CTRL+C to quit)
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 715, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 1058, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connection.py", line 419, in connect
    self.sock = ssl_wrap_socket(
  File "/usr/local/lib/python3.9/dist-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(
  File "/usr/local/lib/python3.9/dist-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/lib/python3.9/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/lib/python3.9/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/lib/python3.9/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 799, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.9/dist-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/local/lib/python3.9/dist-packages/urllib3/packages/six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 715, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connectionpool.py", line 1058, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.9/dist-packages/urllib3/connection.py", line 419, in connect
    self.sock = ssl_wrap_socket(
  File "/usr/local/lib/python3.9/dist-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(
  File "/usr/local/lib/python3.9/dist-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/lib/python3.9/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/lib/python3.9/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/lib/python3.9/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/inference/core/roboflow_api.py", line 78, in wrapper
    return function(*args, **kwargs)
  File "/app/inference/core/roboflow_api.py", line 353, in get_from_url
    return _get_from_url(url=url, json_response=json_response)
  File "/app/inference/core/roboflow_api.py", line 357, in _get_from_url
    response = requests.get(wrap_url(url))
  File "/usr/local/lib/python3.9/dist-packages/requests/api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.9/dist-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/requests/adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/inference/core/interfaces/http/http_api.py", line 163, in wrapped_route
    return await route(*args, **kwargs)
  File "/app/inference/core/interfaces/http/http_api.py", line 1408, in legacy_infer_from_request
    self.model_manager.add_model(
  File "/app/inference/core/managers/decorators/fixed_size_cache.py", line 61, in add_model
    raise error
  File "/app/inference/core/managers/decorators/fixed_size_cache.py", line 55, in add_model
    return super().add_model(model_id, api_key, model_id_alias=model_id_alias)
  File "/app/inference/core/managers/decorators/base.py", line 55, in add_model
    self.model_manager.add_model(model_id, api_key, model_id_alias=model_id_alias)
  File "/app/inference/core/managers/base.py", line 60, in add_model
    model = self.model_registry.get_model(resolved_identifier, api_key)(
  File "/app/inference/models/vit/vit_classification.py", line 26, in __init__
    super().__init__(*args, **kwargs)
  File "/app/inference/core/models/classification_base.py", line 40, in __init__
    super().__init__(*args, **kwargs)
  File "/app/inference/core/models/roboflow.py", line 607, in __init__
    self.initialize_model()
  File "/app/inference/core/models/roboflow.py", line 692, in initialize_model
    self.get_model_artifacts()
  File "/app/inference/core/models/roboflow.py", line 221, in get_model_artifacts
    self.cache_model_artefacts()
  File "/app/inference/core/models/roboflow.py", line 231, in cache_model_artefacts
    self.download_model_artifacts_from_roboflow_api()
  File "/app/inference/core/models/roboflow.py", line 287, in download_model_artifacts_from_roboflow_api
    environment = get_from_url(api_data["environment"])
  File "/app/inference/core/roboflow_api.py", line 80, in wrapper
    raise RoboflowAPIConnectionError(
inference.core.exceptions.RoboflowAPIConnectionError: Could not connect to Roboflow API.
INFO:     127.0.0.1:36410 - "POST /hl-hif5j/1?api_key= xxxxxxgd1Eccn HTTP/1.1" 503 Service Unavailable



And this also failed:


nvidia@nvidia-desktop:~/jetsonUtilities$ sudo docker run --privileged --net=host --runtime=nvidia --mount source=roboflow,target=/tmp/cache -e NUM_WORKERS=1 roboflow/roboflow-inference-server-trt-jetson-5.1.1:latest
Traceback (most recent call last):
  File "/usr/local/bin/uvicorn", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/main.py", line 410, in main
    run(
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/main.py", line 578, in run
    server.run()
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 848, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/app/trt_http.py", line 10, in <module>
    from inference.models.utils import ROBOFLOW_MODEL_TYPES
  File "/app/inference/models/__init__.py", line 16, in <module>
    from inference.models.vit import VitClassification
  File "/app/inference/models/vit/__init__.py", line 1, in <module>
    from inference.models.vit.vit_classification import VitClassification
  File "/app/inference/models/vit/vit_classification.py", line 2, in <module>
    from inference.core.models.classification_base import (
  File "/app/inference/core/models/classification_base.py", line 16, in <module>
    from inference.core.models.roboflow import OnnxRoboflowInferenceModel
  File "/app/inference/core/models/roboflow.py", line 10, in <module>
    import cv2
ImportError: /lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
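The libgomp static-TLS failure is a known aarch64 issue when OpenMP-linked libraries (like OpenCV) are imported late. A commonly reported workaround, offered here only as a sketch to try, is to preload libgomp so it loads before Python's extension modules; the path comes straight from the error message above:

```shell
sudo docker run --privileged --net=host --runtime=nvidia \
  -e LD_PRELOAD=/lib/aarch64-linux-gnu/libgomp.so.1 \
  --mount source=roboflow,target=/tmp/cache -e NUM_WORKERS=1 \
  roboflow/roboflow-inference-server-trt-jetson-5.1.1:latest
```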

The hosted API, however, works fine:

base64 he1--.png | curl -d @- \
> "https://detect.roboflow.com/hl-hif5j/1?api_key=xxxxx"

{"time":0.17959950200008734,"image":{"width":480,"height":640},"predictions":{"abnormal":{"confidence":0.5574895143508911},"normal":{"confidence":0.48462098836898804}},"predicted_classes":["abnormal"]}

The root cause: the network provider was blocking access to the Roboflow API server.
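For anyone hitting the same wall: the repeated `[Errno 104]` during the TLS handshake is the signature of a middlebox killing the connection, not a bug in the container. A small, hypothetical stdlib helper (not part of the inference server) that walks an exception chain and flags this case could make the 503 actionable:

```python
# Hypothetical diagnostic helper: walk an exception's __cause__/__context__
# chain and name the likely network failure, so a deployment can report
# "provider is blocking the API" instead of a bare 503.
import errno

def classify_connection_error(exc: BaseException) -> str:
    seen = exc
    while seen is not None:
        if isinstance(seen, ConnectionResetError) or \
           getattr(seen, "errno", None) == errno.ECONNRESET:
            # errno 104: the peer (or a firewall/middlebox in between)
            # dropped the TCP connection, often mid TLS handshake --
            # consistent with a provider blocking the Roboflow API.
            return "reset-by-peer (likely firewall/provider block)"
        seen = seen.__cause__ or seen.__context__
    return "other"
```

Checking connectivity from the same network with a plain `curl` to the API host, as done above against detect.roboflow.com, remains the quickest way to confirm the diagnosis.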