Roboflow Docker image for Jetson does not support my model

I have a problem with my model.

I am using the Docker image roboflow/roboflow-inference-server-jetson-4.6.1:latest on my Jetson Nano board.

It worked well until I tried a new model version.

Version 2 (trained in February) works well, but version 3 (trained in April) does not.

Is there an option for this at training time?

The following is the error log from docker run:

Traceback (most recent call last):
  File "/app/inference/core/models/roboflow.py", line 701, in initialize_model
    self.onnx_session = onnxruntime.InferenceSession(
  File "/usr/local/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /tmp/cache/scanall/3/weights.onnx failed:/home/onnxruntime/onnxruntime/onnxruntime/core/graph/model_load_utils.h:57 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::__cxx11::basic_string, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only guarantees support for models stamped with official released onnx opset versions. Opset 17 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 16.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/inference/core/interfaces/http/http_api.py", line 163, in wrapped_route
    return await route(*args, **kwargs)
  File "/app/inference/core/interfaces/http/http_api.py", line 1408, in legacy_infer_from_request
    self.model_manager.add_model(
  File "/app/inference/core/managers/decorators/fixed_size_cache.py", line 61, in add_model
    raise error
  File "/app/inference/core/managers/decorators/fixed_size_cache.py", line 55, in add_model
    return super().add_model(model_id, api_key, model_id_alias=model_id_alias)
  File "/app/inference/core/managers/decorators/base.py", line 55, in add_model
    self.model_manager.add_model(model_id, api_key, model_id_alias=model_id_alias)
  File "/app/inference/core/managers/base.py", line 60, in add_model
    model = self.model_registry.get_model(resolved_identifier, api_key)(
  File "/app/inference/core/models/roboflow.py", line 607, in __init__
    self.initialize_model()
  File "/app/inference/core/models/roboflow.py", line 707, in initialize_model
    raise ModelArtefactError(
inference.core.exceptions.ModelArtefactError: Unable to load ONNX session. Cause: [ONNXRuntimeError] : 1 : FAIL : Load model from /tmp/cache/scanall/3/weights.onnx failed:/home/onnxruntime/onnxruntime/onnxruntime/core/graph/model_load_utils.h:57 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::__cxx11::basic_string, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only guarantees support for models stamped with official released onnx opset versions. Opset 17 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 16.
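
For reference, a quick way to confirm which opset the cached weights are stamped with is to load the file with the onnx Python package. A minimal sketch, assuming the cache path shown in the traceback above:

import onnx

# Path taken from the error log above; adjust if your cache location differs.
model = onnx.load("/tmp/cache/scanall/3/weights.onnx")

# Each entry names an operator-set domain and the opset version the model requires;
# an empty domain string means the default ai.onnx domain.
for opset in model.opset_import:
    print(opset.domain or "ai.onnx", opset.version)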

Hello there,

Thanks for raising the issue. We recently introduced a couple of changes into our training process, but we believed they would be compatible with Jetson deployment.

We would be happy to debug the issue, but we need some additional context from you:

  • What is the digest of the Docker image you run? (The latest tag shifts from image to image.)
  • What model are we talking about? Ideally we would like to know the model IDs, but if you cannot reveal them here, I suggest contacting us privately, or at least providing the type of the model.

Thank you for your reply.

  • What is the digest of the Docker image you run? (The latest tag shifts from image to image.)
    → I tried two versions: latest and test (for 4.6.1).

  • What model are we talking about? Ideally we would like to know the model IDs, but if you cannot reveal them here, I suggest contacting us privately, or at least providing the type of the model.
    → scanall/1 and scanall/2 work well;
    scanall/3 does not work.

cardscanall/1 and cardscanall/2 do not work either.

OK, but could you please provide the output of:

docker inspect --format='{{index .RepoDigests 0}}' $IMAGE

As I said, the latest tag drifts over time, and I would like to check the specific artefact that fails on your end.

Thank you.

Here is the output:

jetson@nano:~/soft$ sudo docker inspect --format='{{index .RepoDigests 0}}' roboflow/roboflow-inference-server-jetson-4.6.1:latest
roboflow/roboflow-inference-server-jetson-4.6.1@sha256:668f3ebeaa640458f7d9a2e33ba44cf047b51c99f1285abbf8245ce4a7aecfbe

OK, I verified the problem. It seems the training process changed slightly on our end, causing exports to use a newer opset that is not supported on Jetson 4.x. We will try to fix that and notify you once it's done.
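
In the meantime, a possible workaround sketch (not an official fix, and assuming the server keeps using the already-cached weights.onnx instead of re-downloading it) is to down-convert the cached model to opset 16 with the onnx version converter and then restart the container:

import onnx
from onnx import version_converter

# Path taken from the error log above; adjust if your cache location differs.
path = "/tmp/cache/scanall/3/weights.onnx"
model = onnx.load(path)

# Best-effort down-conversion to opset 16; this can fail if the graph uses
# operators whose behaviour changed between opset 16 and 17.
converted = version_converter.convert_version(model, 16)

onnx.checker.check_model(converted)
onnx.save(converted, path)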

If the Docker image update is delayed, is there an option to train new versions the same way the previous versions were trained?