Update on my end - following some guidance in one of those links, I looked at NVIDIA's support matrix for CUDA versions. It said that on my "older" GPU generation I should be using CUDA 11.8. So I uninstalled 12.6, installed 11.8, and now I get the errors below.
These errors seem to indicate two things. First, that maybe I need to be on CUDA 12.x after all. The matrix does say to use 11.8 "for best performance" on my GPU, so maybe I don't have to be on 11.8 to make it work; it just might not run as fast?
Second, the ONNX Runtime piece complicates things, because its requirements page (linked in the error log) only lists CUDA 11.8 with cuDNN <= 8.x. But the NVIDIA site says I can go to cuDNN 9.x with CUDA 11.8. So do I just need to back cuDNN down to 8.x to get ONNX Runtime to work? (So much going on!)
https://docs.nvidia.com/deeplearning/cudnn/latest/reference/support-matrix.html
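Side note: I figured a quick sanity check on what my installed onnxruntime build actually expects might help before I reinstall anything else. Something like this (assuming the onnxruntime-gpu wheel is what's installed) should print the version and the providers the build was compiled with:

import onnxruntime as ort

# Each onnxruntime release pins a CUDA/cuDNN major version, so the version
# number tells you which combination the wheel expects.
print("onnxruntime version:", ort.__version__)

# Providers the build supports. CUDA showing up here only means CUDA support
# was compiled in, not that the CUDA/cuDNN DLLs will actually load.
print("available providers:", ort.get_available_providers())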
UserWarning: Specified provider 'OpenVINOExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
2024-12-09 16:17:11.2470127 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\zacra\anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2024-12-09 16:17:11.2573155 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (NVIDIA - CUDA | onnxruntime), make sure they're in the PATH, and that your GPU is supported.
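From what I can tell, LoadLibrary error 126 means the provider DLL itself exists but one of the CUDA/cuDNN DLLs it depends on can't be resolved. A rough way to test that theory from Python (the DLL names are my guesses for the cuDNN 9 and cuDNN 8 runtimes on Windows):

import ctypes

# Try to resolve the cuDNN runtime directly. If neither loads, onnxruntime's
# CUDA provider DLL will fail the same way with error 126.
for name in ("cudnn64_9.dll", "cudnn64_8.dll"):
    try:
        ctypes.WinDLL(name)
        print(name, "resolved OK")
    except OSError as exc:
        print(name, "not resolvable:", exc)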
THEN
When I set the environment variable as you said, the errors do reduce to those below.
I set it via two lines in my Python file:
import os
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = '["CUDAExecutionProvider"]'
UserWarning: Specified provider '"CUDAExecutionProvider"' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
*************** EP Error ***************
EP Error Unknown Provider Type: "CUDAExecutionProvider" when using ['"CUDAExecutionProvider"']
Falling back to ['CPUExecutionProvider'] and retrying.
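Looking at that warning, the extra quotes in '"CUDAExecutionProvider"' suggest the env var value is being split as plain text, so the literal quote characters end up inside the provider name. If I can get at the code that builds the session, passing the providers list directly would sidestep the env var parsing entirely (the model path below is just a placeholder):

import onnxruntime as ort

# Provider names must be bare strings; no JSON-style quoting around them.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder; substitute the real model path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)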
AND THEN
When I changed the provider line to the version below (without the inner quotes), I got new errors (further below).
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = '[CUDAExecutionProvider]'
Result:
2024-12-09 16:44:39.7958310 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\zacra\anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2024-12-09 16:44:39.8053160 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
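If this really is a missing-dependency problem rather than a version mismatch, maybe registering the CUDA and cuDNN bin folders before importing onnxruntime would help. A sketch, with paths that are only guesses for a default install (adjust to wherever CUDA/cuDNN actually live on your machine):

import os

# Hypothetical install locations; these will differ per machine.
for d in (
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin",
    r"C:\Program Files\NVIDIA\CUDNN\v9\bin",
):
    if os.path.isdir(d):
        # Python 3.8+ on Windows needs DLL folders registered explicitly.
        os.add_dll_directory(d)

import onnxruntime as ort  # import only after the DLL directories are added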