I am a newbie doing my first object detection project (traffic counting). I am running inference on a Windows 10 machine with an Nvidia RTX 3060 GPU, using Python 3.10.15 with Conda to manage the environment. I have built a model and connected to an RTSP feed from a Reolink NVR. I got it to work, but the video is definitely herky-jerky.
I have tried changing the camera settings to a lower bit rate and resolution. The camera is a PoE model connected through a 1 Gb wireless network, and when I view it via the Reolink NVR, the video streams smoothly.
I believe I have the RTX 3060 working correctly. It took me a while, but I got CUDA 12 and cuDNN 9 working.
I know there will be some overhead with inference, but I am wondering if there is something I can do in the Python code to smooth things out. The herky-jerky playback is OK as long as I get an accurate traffic count, but if it can run smoother, that would be better.
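For context, my loop is more or less the standard capture-and-infer pattern. Here is a simplified sketch of it (the model ID and RTSP URL below are placeholders, not my real ones):

import cv2
import supervision as sv
from inference import get_model

model = get_model(model_id="traffic-counting/1")  # placeholder model ID
cap = cv2.VideoCapture("rtsp://user:pass@192.168.1.10:554/h264Preview_01_sub")  # placeholder URL

box_annotator = sv.BoxAnnotator()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # run inference on the current frame and draw the detections
    results = model.infer(frame)[0]
    detections = sv.Detections.from_inference(results)
    frame = box_annotator.annotate(scene=frame, detections=detections)
    cv2.imshow("traffic", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()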
Thanks very much. This is my first programming project, and first with a webcam.
Brad, thanks for your help. When I run this, it shows no memory usage on the GPU. As I read the documentation, it looks like I need to install Docker. I am not too familiar with that.
I am not sure how I need to modify my code to make that happen, but I will read some more.
I’d love to help you solve this issue; many thanks for providing all the details you have already shared.
You mentioned you created a Conda environment, so I assume you are running directly on the host (not in a Docker container).
Can you confirm whether you have installed inference-gpu?
I created a very simple script which loads a model onto my GPU. I have CUDA 12 in my local setup (however, I’m running on Linux, not Windows).
I created my venv like below:
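Something along these lines (a sketch; the exact versions don’t matter much, and the activation step will look different on Windows):

python -m venv venv
source venv/bin/activate
pip install inference-gpu

And the test script was essentially this (the model ID is just an example; any small model should do):

import onnxruntime as ort
print(ort.get_available_providers())  # CUDAExecutionProvider should appear in this list

from inference import get_model
model = get_model(model_id="yolov8n-640")  # example model ID
# while this is loaded, nvidia-smi should show the python process using GPU memory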
Thanks very much. I will take a look. It may be a few days before I can try this; I work a day job and am just getting back from Thanksgiving and playing catch-up. Thanks again for your help.
I did some more work on this and made some changes. I did not take the venv approach; instead I am still using my Conda environment. However, I did update inference with pip install --upgrade inference. This installed some additional packages and made one of the error messages go away.
I did run the second block of code and these are the warnings I am getting:
PS C:\Users\Owner> & D:/AnacondaReinstall/envs/roboflow_env3/python.exe c:/Users/Owner/RoboflowHelp1.py
SupervisionWarnings: BoundingBoxAnnotator is deprecated: BoundingBoxAnnotator is deprecated and has been renamed to BoxAnnotator. BoundingBoxAnnotator will be removed in supervision-0.26.0.
UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'OpenVINOExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
It looks like the CUDA execution provider is not available. Not sure how I can fix this.
I asked ChatGPT and it suggested I run nvidia-smi. It looks like the GPU is recognized.
Anyway, it seems you are on the right track. I was unsure about the ONNXRUNTIME_EXECUTION_PROVIDERS… script. Can I include this line in the other .py files instead of running it separately?
Thanks for all of your assistance. I think we are close, just haven’t found the right answer yet.
(roboflow_env3) C:\Users\Owner>pip install nvidea-ml-py
ERROR: Could not find a version that satisfies the requirement nvidea-ml-py (from versions: none)
ERROR: No matching distribution found for nvidea-ml-py
I am thinking there may be some conflict between the versions of CUDA and cuDNN that I have installed. I did some more Q&A with ChatGPT and it suggested I install PyTorch. After installing it, I ran a script to see if PyTorch could see the GPU, and it now can, but only after I installed something related to CUDA 12.4. I also have CUDA 12.6 installed on this machine.
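The check was just the usual PyTorch snippet, something like:

import torch
print(torch.__version__)
print(torch.version.cuda)          # CUDA version PyTorch was built against (the 12.4 install in my case)
print(torch.cuda.is_available())   # True means PyTorch can see the GPU
print(torch.cuda.get_device_name(0))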
Below is what I can see from conda list for this environment. The relationship between what you install at the machine level in Windows (via downloads from Nvidia) and what is installed in each environment is confusing to me.
Thanks for all of your help.
(roboflow_env3) C:\Users\Owner>conda list
# packages in environment at D:\AnacondaReinstall\envs\roboflow_env3:
It seems you have both inference and inference-gpu installed; can you try uninstalling inference (and maybe updating inference-gpu)? I also see you have onnxruntime-gpu as well as onnxruntime; I only have onnxruntime-gpu in my venv.
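Something like this, run inside the roboflow_env3 environment (a sketch; double-check that nothing else in that env needs the plain onnxruntime package before removing it):

pip uninstall -y inference onnxruntime
pip install --upgrade inference-gpu
pip install --force-reinstall onnxruntime-gpu

The force-reinstall is there because onnxruntime and onnxruntime-gpu install into the same package folder, so having had both can leave a mixed-up install behind.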
For your reference here are my dependencies that I have after building fresh venv with inference-gpu:
However, rerunning it in VS Code, I am still getting some errors. It now detects CUDA as a provider, but is still having difficulties. Here are the errors I am seeing. I am running CUDA 12.6 and cuDNN 9.6; based on the ONNX Runtime requirements, this should work. Very perplexing.
PS C:\Users\Owner> & D:/AnacondaReinstall/envs/roboflow/python.exe "c:/Users/Owner/Roboflow Test 20241123 RSTP.py"
UserWarning: Specified provider 'OpenVINOExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
2024-12-07 21:11:52.7607413 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\AnacondaReinstall\envs\roboflow\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2024-12-07 21:11:52.7752091 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (NVIDIA - CUDA | onnxruntime), make sure they're in the PATH, and that your GPU is supported.
In addition to what @peter_roboflow suggested: I assume you have already updated your PATH with the paths to CUDA_INSTALL_DIR/bin, CUDA_INSTALL_DIR/libnvvp, and cuDNN_INSTALL_DIR/bin?
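A quick way to check from PowerShell (the paths below are just the usual defaults; adjust them to wherever CUDA and cuDNN actually live on your machine):

$env:Path -split ';' | Select-String 'CUDA|cudnn'
Test-Path 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin'
Test-Path 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\libnvvp'
# the cuDNN 9 bin directory (wherever you installed or extracted it) also needs to be on PATH
# LoadLibrary error 126 usually means a DLL that onnxruntime_providers_cuda.dll depends on could not be found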
We can also try to capture more verbose logs from ONNX Runtime by setting ORT_LOG_LEVEL=VERBOSE and running your script.
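For example, from PowerShell (the variable only applies to that session; the second line is the same command you ran above):

$env:ORT_LOG_LEVEL = "VERBOSE"
& D:/AnacondaReinstall/envs/roboflow/python.exe "c:/Users/Owner/Roboflow Test 20241123 RSTP.py"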
I am having a similar issue. I've worked through most of the steps on this page (and about 100 other sites). The only difference right now is that I'm not in a venv for this. I was, but then certain global things had to be updated anyway, so I just stayed global as I worked through this. My current error is below. I have had other errors come and go, but this one will not go away. (My GPU is an NVIDIA GeForce GTX 1660 Ti.) I don't want to derail the current solve for @Pete_Olesen; I just wanted to chime in that I'm following the discussion with a similar issue.
UserWarning: Specified provider 'OpenVINOExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
To add to @peter_roboflow's explanation, you can narrow down the list of ONNX execution providers by setting ONNXRUNTIME_EXECUTION_PROVIDERS='[CUDAExecutionProvider]', as mentioned in one of the replies above. Setting this environment variable should make those warnings go away.
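For example, you can set it in PowerShell before launching the script, or at the very top of the .py file itself, which should also answer the earlier question about putting it in the script (it just has to be set before anything from inference is imported):

$env:ONNXRUNTIME_EXECUTION_PROVIDERS = "[CUDAExecutionProvider]"

Or in the script:

import os
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = "[CUDAExecutionProvider]"
# import inference only after the variable is set
from inference import get_model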