Error at Inference Server Start

Hi,

I'm attempting to deploy a model to my laptop, which has an NVIDIA RTX 2000 Ada Generation GPU, and I'm having trouble at this command:

inference server start

> inference server start
GPU detected. Using a GPU image.
Pulling image: roboflow/roboflow-inference-server-gpu:latest

500 Server Error for http+docker://localnpipe/v1.46/images/create?tag=latest&fromImage=roboflow%2Froboflow-inference-server-gpu: Internal Server Error ("Head "https://registry-1.docker.io/v2/roboflow/roboflow-inference-server-gpu/manifests/latest": unauthorized: incorrect username or password")

The AI bot says I need to install the NVIDIA Docker runtime? But it looks like this is some kind of password issue? Any help is most appreciated.

I don’t think I’ve ever seen this error. It looks like the Docker environment on that machine may be misconfigured.

Does the CUDA sample container work?

sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

Turns out I needed to verify my Docker account email, and that did the trick!
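For anyone else who hits the same "unauthorized: incorrect username or password" error: after fixing the account (e.g. verifying the email), the Docker client may still be holding stale cached credentials. A general sketch of clearing them and retrying the pull (the image name is the one from the error above; the rest is standard Docker CLI usage, not anything Roboflow-specific):

```shell
# Clear the cached Docker Hub credentials (stored via ~/.docker/config.json)
docker logout

# Re-authenticate with the (now verified) Docker Hub account
docker login

# Retry the pull that was failing
docker pull roboflow/roboflow-inference-server-gpu:latest
```

If the pull succeeds after this, `inference server start` should be able to use the image as normal.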
Thanks
