Can't run semantic segmentation model on Nvidia Jetson

Please answer swiftly as this step will be crucial for my team to race in a collegiate competition.

We are trying to follow these instructions (NVIDIA Jetson | Roboflow Docs) to run our semantic segmentation model (dsc190 road detection Semantic Segmentation Dataset and Pre-Trained Model by dsc190) on an NVIDIA Jetson.

We opened two shells.

In shell 1, we executed the Docker shell using the following command:

sudo docker run --privileged --net=host --runtime=nvidia
--mount source=roboflow,target=/tmp/cache -e NUM_WORKERS=1
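For reference, the command as pasted stops after `-e NUM_WORKERS=1` with no container image named, and a pasted en-dash (`–mount`) is not the double hyphen docker expects, so docker would reject the flag. A sketch of the shape step 3 takes (the image tag here is an assumption — use the tag from the docs that matches your JetPack release):

```shell
#!/bin/sh
# The image tag is an assumption; match it to your JetPack version
# per the Roboflow Jetson docs.
IMAGE="roboflow/roboflow-inference-server-jetson-5.1.1"

# Assemble the full command. Note --mount and --net use double
# hyphens; forum/editor software often converts them to en-dashes.
set -- sudo docker run --privileged --net=host --runtime=nvidia \
    --mount source=roboflow,target=/tmp/cache \
    -e NUM_WORKERS=1 \
    "$IMAGE"

# Print the command instead of running it, so this sketch is safe to
# execute anywhere; run "$@" on the Jetson itself to start the server.
echo "$@"
```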

In shell 2, we tried to make an inference on an image

base64 YOUR_IMAGE.jpg | curl -d @- "http://localhost:9001/dsc190-road-detection/10?api_key=(OUR API KEY)"
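The curl line above is easy to get wrong because the shell treats `?` and the key specially unless the URL is quoted, and smart quotes pasted from a web page are not recognized as quotes at all — either can send a mangled or empty key, which would produce exactly this 401. A minimal sketch of the same request with safe quoting (the key value is a placeholder; model ID and version are taken from the post):

```shell
#!/bin/sh
# Placeholder key for illustration -- substitute your real private key.
API_KEY="YOUR_API_KEY"
MODEL_ID="dsc190-road-detection"
VERSION="10"

# Build the URL once and quote it everywhere it is used, so the shell
# cannot split it or pass smart quotes through as literal characters.
URL="http://localhost:9001/${MODEL_ID}/${VERSION}?api_key=${API_KEY}"
echo "$URL"

# Then POST the base64-encoded image to the local inference server:
# base64 YOUR_IMAGE.jpg | curl -d @- "$URL"
```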

After running the inference on an image, we got this error in shell 1:

Traceback (most recent call last):
File "/app/inference/core/", line 78, in wrapper
return function(*args, **kwargs)
File "/app/inference/core/", line 198, in get_roboflow_model_data
api_data = _get_from_url(url=api_url)
File "/app/inference/core/", line 355, in _get_from_url
File "/app/inference/core/utils/", line 15, in api_key_safe_raise_for_status
File "/usr/local/lib/python3.9/dist-packages/requests/", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url:***y5

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/app/inference/core/interfaces/http/", line 163, in wrapped_route
return await route(*args, **kwargs)
File "/app/inference/core/interfaces/http/", line 1401, in legacy_infer_from_request
File "/app/inference/core/managers/decorators/", line 61, in add_model
raise error
File "/app/inference/core/managers/decorators/", line 55, in add_model
return super().add_model(model_id, api_key, model_id_alias=model_id_alias)
File "/app/inference/core/managers/decorators/", line 55, in add_model
self.model_manager.add_model(model_id, api_key, model_id_alias=model_id_alias)
File "/app/inference/core/managers/", line 60, in add_model
model = self.model_registry.get_model(resolved_identifier, api_key)(
File "/app/inference/core/registries/", line 61, in get_model
model_type = get_model_type(model_id, api_key)
File "/app/inference/core/registries/", line 113, in get_model_type
api_data = get_roboflow_model_data(
File "/app/inference/core/", line 91, in wrapper
File "/app/inference/core/", line 58, in
401: lambda e: raise_from_lambda(
File "/app/inference/core/", line 54, in raise_from_lambda
raise exception_type(message) from inner_error
inference.core.exceptions.RoboflowAPINotAuthorizedError: Unauthorized access to roboflow API - check API key. Visit Authentication | Roboflow Docs to learn how to retrieve one.
INFO: - "POST /dsc190-road-detection/10?api_key=X
5 HTTP/1.1" 401 Unauthorized

and this error in shell 2:

{"message":"Unauthorized access to roboflow API - check API key and make sure the key is valid for workspace you use. Visit Authentication | Roboflow Docs to learn how to retrieve one."}

We checked many times to make sure the API key matched the one in our workspace, so we do not think that is the issue. We also confirmed that Roboflow semantic segmentation models can run on the NVIDIA Jetson. What could be wrong with what we did?
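One failure mode that survives visual checks is invisible whitespace: copying the key from the browser can pick up a trailing newline or space, which the server then rejects as a different (invalid) key. A quick way to rule that out before retrying (the key below is a made-up placeholder):

```shell
#!/bin/sh
# Made-up placeholder key with a trailing newline, the kind of
# artifact a browser copy-paste can silently add.
API_KEY="abc123xyz
"

# Strip every whitespace character; if the cleaned key differs from
# the pasted one, the paste carried invisible characters.
CLEAN_KEY="$(printf '%s' "$API_KEY" | tr -d '[:space:]')"

if [ "$CLEAN_KEY" != "$API_KEY" ]; then
    echo "key contained hidden whitespace; use: $CLEAN_KEY"
fi
```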

Hi @devin

Could you share whether the issue was able to be replicated with the Hosted Inference API or our open-source Inference package?

We experienced this issue with these instructions: NVIDIA Jetson | Roboflow Docs

We couldn't get past step #3. Thank you for responding.

Hey @devin

Were you able to reproduce this issue with the Hosted Inference API or the inference package? The reason I ask is that the error you provided suggests an incorrect or missing API key.

Could you double check your API key that you’re using by making sure it matches the one found in your account settings (Under Roboflow API)? If it does match, try using that API key with our hosted inference.
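To make that hosted-API check concrete, here is one way to aim the same request at Roboflow's hosted endpoint instead of the Jetson. The `segment.roboflow.com` host is my assumption for a semantic segmentation model — confirm it against the Hosted API docs; the key is a placeholder:

```shell
#!/bin/sh
# Placeholder values; substitute your real key and image.
API_KEY="YOUR_API_KEY"
MODEL_ID="dsc190-road-detection"
VERSION="10"

# Hosted endpoint (assumed host for semantic segmentation). If this
# request succeeds with the same key that fails on the Jetson, the
# key is fine and the problem is in the local container setup.
HOSTED_URL="https://segment.roboflow.com/${MODEL_ID}/${VERSION}?api_key=${API_KEY}"
echo "$HOSTED_URL"

# base64 YOUR_IMAGE.jpg | curl -d @- "$HOSTED_URL"
```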

This helps us determine whether the issue is an incorrect API key, a correct but broken API key, or a problem in our NVIDIA Jetson package.