Update on my end - following some guidance in one of those links, I looked at the NVIDIA support matrix for CUDA versions. It says that on my "older" GPU generation I should be using CUDA 11.8. So I uninstalled 12.6, installed 11.8, and now I have the errors below.
These errors seem to indicate two things. First, maybe I need to be on a 12.x CUDA after all. The matrix does say "for best performance" go to 11.8 for my GPU, so maybe I don't have to be on 11.8 to make it work; it just might not run as fast?
And the ONNX piece complicates things, because their site (linked in the error log) only gives options for CUDA 11.8 with cuDNN <= 8.x, while the NVIDIA site says I can go to cuDNN 9.x with CUDA 11.8. So do I just need to back cuDNN down to 8.x to get ONNX Runtime to work? (So much going on!)
https://docs.nvidia.com/deeplearning/cudnn/latest/reference/support-matrix.html
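As a sanity check, something like this should show which ONNX Runtime build is installed and which providers it was compiled with (a minimal sketch; note the CUDA provider only ships in the onnxruntime-gpu package, not the plain onnxruntime one):

import onnxruntime as ort

print(ort.__version__)                # the warning below says this build wants CUDA 12.x / cuDNN 9.x
print(ort.get_available_providers())  # should include CUDAExecutionProvider
print(ort.get_device())               # "GPU" for the GPU build, "CPU" otherwise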
UserWarning: Specified provider 'OpenVINOExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
2024-12-09 16:17:11.2470127 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\zacra\anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2024-12-09 16:17:11.2573155 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (NVIDIA - CUDA | onnxruntime), make sure they're in the PATH, and that your GPU is supported.
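For what it's worth, that LoadLibrary "error 126" usually means a dependency of onnxruntime_providers_cuda.dll (the CUDA runtime or cuDNN DLLs) couldn't be found, rather than the provider DLL itself being missing. A quick sketch to narrow down which piece is absent (the DLL names assume the CUDA 12.x / cuDNN 9.x combination the warning asks for):

import ctypes

# Probe the DLLs the CUDA provider depends on; names assume CUDA 12 / cuDNN 9.
for dll in ("cudart64_12.dll", "cublas64_12.dll", "cudnn64_9.dll"):
    try:
        ctypes.WinDLL(dll)
        print(dll, "loads OK")
    except OSError as exc:
        print(dll, "NOT found:", exc)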
THEN
When I set the env variable as you said, the errors do reduce to the ones below.
I set it via two lines in my Python file:
import os
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = "['CUDAExecutionProvider']"
UserWarning: Specified provider ''CUDAExecutionProvider'' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
*************** EP Error ***************
EP Error Unknown Provider Type: 'CUDAExecutionProvider' when using ["'CUDAExecutionProvider'"]
Falling back to ['CPUExecutionProvider'] and retrying.
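One thing that may sidestep the quoting problem entirely: instead of encoding a list inside the env variable string, the providers can be passed directly to the session as a real Python list ("model.onnx" is just a placeholder here):

import onnxruntime as ort

# Pass the provider list directly; "model.onnx" stands in for the actual model file.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # shows which providers were actually applied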
AND THEN
When I changed the provider line to the one below, I got new errors (further below).
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = "[CUDAExecutionProvider]"
Result:
2024-12-09 16:44:39.7958310 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\zacra\anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2024-12-09 16:44:39.8053160 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
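One more thing that might be worth checking before downgrading anything: on Python 3.8+ on Windows, native DLL dependencies are no longer resolved through PATH by default, so even a correct CUDA/cuDNN install can still hit error 126. The directories can be registered explicitly before importing onnxruntime (the paths below are assumptions for a default install; adjust to wherever CUDA 12.x and cuDNN 9.x actually live):

import os

# Assumed default install locations - replace with the real CUDA / cuDNN bin paths.
os.add_dll_directory(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin")
os.add_dll_directory(r"C:\Program Files\NVIDIA\CUDNN\v9.x\bin")

import onnxruntime as ort
print(ort.get_available_providers())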