I have a problem with my model.
I'm using the Docker image roboflow/roboflow-inference-server-jetson-4.6.1:latest on my Jetson Nano board.
It worked well until I tried a new model version.
Version 2 (trained in February) works fine, but version 3 (trained in April) does not.
Is there a training or export option that controls this?
The following is the error log from docker run:
Traceback (most recent call last):
  File "/app/inference/core/models/roboflow.py", line 701, in initialize_model
    self.onnx_session = onnxruntime.InferenceSession(
  File "/usr/local/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /tmp/cache/scanall/3/weights.onnx failed:/home/onnxruntime/onnxruntime/onnxruntime/core/graph/model_load_utils.h:57 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::__cxx11::basic_string, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only guarantees support for models stamped with official released onnx opset versions. Opset 17 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 16.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/app/inference/core/interfaces/http/http_api.py", line 163, in wrapped_route
    return await route(*args, **kwargs)
  File "/app/inference/core/interfaces/http/http_api.py", line 1408, in legacy_infer_from_request
    self.model_manager.add_model(
  File "/app/inference/core/managers/decorators/fixed_size_cache.py", line 61, in add_model
    raise error
  File "/app/inference/core/managers/decorators/fixed_size_cache.py", line 55, in add_model
    return super().add_model(model_id, api_key, model_id_alias=model_id_alias)
  File "/app/inference/core/managers/decorators/base.py", line 55, in add_model
    self.model_manager.add_model(model_id, api_key, model_id_alias=model_id_alias)
  File "/app/inference/core/managers/base.py", line 60, in add_model
    model = self.model_registry.get_model(resolved_identifier, api_key)(
  File "/app/inference/core/models/roboflow.py", line 607, in __init__
    self.initialize_model()
  File "/app/inference/core/models/roboflow.py", line 707, in initialize_model
    raise ModelArtefactError(
inference.core.exceptions.ModelArtefactError: Unable to load ONNX session. Cause: [ONNXRuntimeError] : 1 : FAIL : Load model from /tmp/cache/scanall/3/weights.onnx failed:/home/onnxruntime/onnxruntime/onnxruntime/core/graph/model_load_utils.h:57 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::__cxx11::basic_string, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only guarantees support for models stamped with official released onnx opset versions. Opset 17 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 16.
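From the message, it looks like the version 3 export is stamped with ONNX opset 17, while the onnxruntime build in this Jetson image only supports the ai.onnx domain up to opset 16, which would explain why version 2 still loads. A quick way to confirm the opset stamped on the cached weights would be something like this (a minimal sketch using the onnx Python package; the path comes from the log above):

import onnx

# Path taken from the error log above; adjust if your cache layout differs.
MODEL_PATH = "/tmp/cache/scanall/3/weights.onnx"

model = onnx.load(MODEL_PATH)
for opset in model.opset_import:
    # An empty domain string means the default "ai.onnx" domain.
    print(opset.domain or "ai.onnx", "opset", opset.version)

If there is no training/export option to pin the opset, would downgrading the graph be safe? The onnx version converter can sometimes do this, though down-conversion is not guaranteed for every operator (also just a sketch, not verified on this model):

from onnx import load, save, version_converter

model = load(MODEL_PATH)
# May raise if an operator's schema changed incompatibly between opsets 17 and 16.
downgraded = version_converter.convert_version(model, 16)
save(downgraded, MODEL_PATH)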