Can't run GPU inference server - missing libnvidia-ml.so.1

I get the following error when trying to run the latest GPU version of the inference server on Windows:

```
Image roboflow/roboflow-inference-server-gpu:latest pulled.
Starting inference server container...
400 Client Error for http+docker://localnpipe/v1.47/containers/796db12edaf3e3298640c04226f3a199ac9a686f45707999a8809c443cd9c5f2/start: Bad Request ("failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown")
```

The CPU version works fine.

Hi @LouNL ,

Are you starting Docker through WSL (Windows Subsystem for Linux)?
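
One quick way to narrow this down (assuming Docker Desktop with the WSL 2 backend and a recent NVIDIA driver on the Windows host) is to check whether Docker can see the GPU at all, independently of the inference server image:

```shell
# Run NVIDIA's stock CUDA base image and ask it to list GPUs.
# The image tag below is an example; any recent nvidia/cuda base tag works.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If this fails with the same `libnvidia-ml.so.1` error, the problem is in the Docker/WSL GPU passthrough setup rather than in the Roboflow image.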

Thanks,
Grzegorz