RTSP local inference running slow/jerky

I am a newbie doing my first object detection project (traffic counting). I am running inference on a Windows 10 machine with an Nvidia RTX 3060 GPU, using Python 3.10.15 with Conda to manage the environment. I have built a model and connected to an RTSP feed from a Reolink NVR. I got it to work, but it is definitely herky jerky.

I have tried changing the camera settings to a lower bit rate and resolution. I have a PoE camera connected via a 1 Gb wireless network, and when I view the camera through the Reolink NVR, the video streams smoothly.

I believe I have the RTX 3060 working correctly. It took me a while, but I got CUDA 12 and cuDNN 9 working.

I know there will be some overhead with inference, but I am wondering if there is something I can do in the Python code to smooth things out. The jerkiness is OK as long as I get an accurate traffic count, but if it can run smoother, that would be better.
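One pattern I have been reading about is grabbing frames in a background thread and only ever running inference on the newest frame, so a slow model does not make the display fall behind the stream. A rough sketch of what I mean (the RTSP URL is a placeholder, and I have not verified this against my setup):

import threading

import cv2

latest_frame = None
lock = threading.Lock()

def reader(url):
    # keep only the most recent frame; older frames are simply dropped
    global latest_frame
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        with lock:
            latest_frame = frame

threading.Thread(target=reader, args=("rtsp://...",), daemon=True).start()

while True:
    with lock:
        frame = None if latest_frame is None else latest_frame.copy()
    if frame is not None:
        # run inference on `frame` here, then draw/count
        cv2.imshow("preview", frame)
    if cv2.waitKey(1) == ord("q"):
        break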

Thanks very much. This is my first programming project, and first with a webcam.

If you run nvidia-smi while it’s going, is your GPU being utilized?

Brad, thanks for your help. When I run this, it shows no memory usage on the GPU. As I read the documentation, it looks like I need to install Docker. I am not too familiar with that.

I am not sure how I need to modify my code to make that happen, but will read some more.

Hi @Pete_Olesen!

I’d love to help you solve this issue; many thanks for all the details you have already shared.

You mentioned you created a conda environment, so I assume you are running directly on the host (not in a Docker container).

Can you confirm whether you have installed inference-gpu?
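(A quick check is pip show inference-gpu in the activated environment.)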

I created a very simple script which runs a model on my GPU. I have CUDA 12 in my local setup (though I’m running on Linux, not Windows).
I created my venv like below:

python3.10 -m venv venv
source venv/bin/activate
pip install inference-gpu

Can you share any warnings you see when running this script?

from typing import Any, Dict

import cv2 as cv
import supervision as sv

from inference.core.interfaces.stream.inference_pipeline import InferencePipeline


box_annotator = sv.BoundingBoxAnnotator()


def custom_sink(prediction: Dict[str, Any], video_frame) -> None:
    # convert the raw prediction to supervision Detections and draw boxes
    detections = sv.Detections.from_inference(prediction)
    annotated_frame = box_annotator.annotate(
        video_frame.image.copy(), detections=detections
    )
    cv.imshow("", annotated_frame)
    cv.waitKey(1)


# video_reference=0 uses the default webcam; an RTSP URL works here too
inference_pipeline = InferencePipeline.init(
    video_reference=0,
    model_id="yolov8n-640",
    on_prediction=custom_sink,
)

inference_pipeline.start()
inference_pipeline.join()

Please run the above script like below:

ONNXRUNTIME_EXECUTION_PROVIDERS='[CUDAExecutionProvider]' python /path/to/this_script.py
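A note: the inline VAR=value form above is bash syntax. In Windows PowerShell, setting the variable first should do the same thing:

$env:ONNXRUNTIME_EXECUTION_PROVIDERS = '[CUDAExecutionProvider]'
python C:\path\to\this_script.py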

Many thanks!
Grzegorz

Grzegorz,

Thanks very much. I will take a look. It may be a few days before I can try this; I work a day job and, coming back from Thanksgiving, I am playing catch-up. Thanks again for your help.


Grzegorz,

I did some more work on this and made some progress. I did not use the venv approach; instead I am using my conda environment. However, I did update inference with pip install --upgrade inference. This installed some additional things and made one of the error messages go away.

I did run the second block of code and these are the warnings I am getting:

PS C:\Users\Owner> & D:/AnacondaReinstall/envs/roboflow_env3/python.exe c:/Users/Owner/RoboflowHelp1.py
SupervisionWarnings: BoundingBoxAnnotator is deprecated: BoundingBoxAnnotator is deprecated and has been renamed to BoxAnnotator. BoundingBoxAnnotator will be removed in supervision-0.26.0.
UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'OpenVINOExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'

It looks like the CUDA execution provider is not available. Not sure how I can fix this.

I asked ChatGPT and it suggested I run nvidia-smi. It looks like the GPU is recognized.

(roboflow_env3) C:\Users\Owner>nvidia-smi
Wed Dec  4 21:15:22 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 561.17                 Driver Version: 561.17         CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3060       WDDM |   00000000:22:00.0  On |                  N/A |
|  0%   59C    P8             15W /  170W |   1685MiB /  12288MiB  |      1%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      1260    C+G   ...\Docker\frontend\Docker Desktop.exe        N/A    |
|    0   N/A  N/A      2120    C+G   ...siveControlPanel\SystemSettings.exe        N/A    |
|    0   N/A  N/A      3836    C+G   C:\Windows\explorer.exe                       N/A    |
|    0   N/A  N/A      5200    C+G   ...on\131.0.2903.70\msedgewebview2.exe        N/A    |
|    0   N/A  N/A      5788    C+G   ...les\Microsoft OneDrive\OneDrive.exe        N/A    |
|    0   N/A  N/A      6288    C+G   ...1.0_x64__8wekyb3d8bbwe\Video.UI.exe        N/A    |
|    0   N/A  N/A      6892    C+G   ...Search_cw5n1h2txyewy\SearchApp.exe         N/A    |
|    0   N/A  N/A      7444    C+G   ...oogle\Chrome\Application\chrome.exe        N/A    |
|    0   N/A  N/A      8416    C+G   ...crosoft\Edge\Application\msedge.exe        N/A    |
|    0   N/A  N/A      8432    C+G   D:\Microsoft VS Code\Code.exe                 N/A    |
|    0   N/A  N/A      8664    C+G   ...Search_cw5n1h2txyewy\SearchApp.exe         N/A    |
|    0   N/A  N/A     10184    C+G   ...CBS_cw5n1h2txyewy\TextInputHost.exe        N/A    |
|    0   N/A  N/A     11764    C+G   ...on\131.0.2903.70\msedgewebview2.exe        N/A    |
|    0   N/A  N/A     12140    C+G   ...on\HEX\Creative Cloud UI Helper.exe        N/A    |
|    0   N/A  N/A     13548    C+G   ...ejd91yc\AdobeNotificationClient.exe        N/A    |
|    0   N/A  N/A     14108    C+G   ...5n1h2txyewy\ShellExperienceHost.exe        N/A    |
|    0   N/A  N/A     15408    C+G   ...on\131.0.2903.70\msedgewebview2.exe        N/A    |
|    0   N/A  N/A     15992    C+G   ...oogle\Chrome\Application\chrome.exe        N/A    |
|    0   N/A  N/A     19196    C+G   ...crosoft\Edge\Application\msedge.exe        N/A    |
+-----------------------------------------------------------------------------------------+

Anyway, it seems you are on the right track. I was unsure about the ONNXRUNTIME_EXECUTION_PROVIDERS… part. Can I include this line in the other .py files instead of running it separately?

Thanks for all of your assistance. I think we are close, just haven’t found the right answer yet.

Hi @Pete_Olesen ,

Can you share the result of pip freeze and uname -a?
Can you also install nvidia-ml-py in your environment and share the results of the below:

python -c "from pynvml import nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetName;handle = nvmlDeviceGetHandleByIndex(0);print(nvmlDeviceGetName(handle))"

This will help us confirm that we are running inference at the right version and, hopefully, that CUDA can be loaded from your environment.

Thanks,
Grzegorz

Thanks. I ran the pip install from the conda prompt. Here are the results. Thanks for your diligence.

(base) C:\Users\Owner>conda activate roboflow_env3

(roboflow_env3) C:\Users\Owner>pip install nvidea-ml-py
ERROR: Could not find a version that satisfies the requirement nvidea-ml-py (from versions: none)
ERROR: No matching distribution found for nvidea-ml-py

(roboflow_env3) C:\Users\Owner>pip install nvidia-ml-py
Collecting nvidia-ml-py
Downloading nvidia_ml_py-12.560.30-py3-none-any.whl.metadata (8.6 kB)
Downloading nvidia_ml_py-12.560.30-py3-none-any.whl (40 kB)
Installing collected packages: nvidia-ml-py
Successfully installed nvidia-ml-py-12.560.30

(roboflow_env3) C:\Users\Owner>python -c "from pynvml import nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetName;handle = nvmlDeviceGetHandleByIndex(0);print(nvmlDeviceGetName(handle))"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\AnacondaReinstall\envs\roboflow_env3\lib\site-packages\pynvml.py", line 2435, in nvmlDeviceGetHandleByIndex
    fn = _nvmlGetFunctionPointer("nvmlDeviceGetHandleByIndex_v2")
  File "D:\AnacondaReinstall\envs\roboflow_env3\lib\site-packages\pynvml.py", line 994, in _nvmlGetFunctionPointer
    raise NVMLError(NVML_ERROR_UNINITIALIZED)
pynvml.NVMLError_Uninitialized: Uninitialized

(roboflow_env3) C:\Users\Owner>

Hi @Pete_Olesen ,

Thanks for running the script. Please accept my apologies, I forgot to include a call to nvmlInit(); can you run the corrected script below?

python -c "from pynvml import nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetName;nvmlInit();handle = nvmlDeviceGetHandleByIndex(0);print(nvmlDeviceGetName(handle))"

Thanks,
Grzegorz

Grzegorz,

No apologies needed! Here is what it returned:

(base) C:\Users\Owner>conda activate roboflow_env3

(roboflow_env3) C:\Users\Owner>python -c "from pynvml import nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetName;nvmlInit();handle = nvmlDeviceGetHandleByIndex(0);print(nvmlDeviceGetName(handle))"
NVIDIA GeForce RTX 3060

(roboflow_env3) C:\Users\Owner>

I am thinking there may be a conflict between the versions of CUDA and cuDNN that I have installed. I did some more Q&A with ChatGPT and it suggested I install PyTorch. After installing it, I ran a script to see if PyTorch could see the GPU; it now could, but only after I installed something related to CUDA 12.4. I also have CUDA 12.6 installed on this machine.
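For reference, the check I ran was essentially this kind of one-liner:

python -c "import torch; print(torch.cuda.is_available()); print(torch.version.cuda)"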

Below is what I can see from conda list for this environment. The relationship between what you install at the Windows machine level (via downloads from NVIDIA) and what is installed in each environment is confusing to me.

Thanks for all of your help.

(roboflow_env3) C:\Users\Owner>conda list

# packages in environment at D:\AnacondaReinstall\envs\roboflow_env3:
#
# Name                    Version                   Build  Channel

aiofiles 24.1.0 pypi_0 pypi
aiohappyeyeballs 2.4.3 pypi_0 pypi
aiohttp 3.10.11 pypi_0 pypi
aiohttp-retry 2.8.3 pypi_0 pypi
aioice 0.9.0 pypi_0 pypi
aiortc 1.9.0 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
aiosqlite 0.20.0 pypi_0 pypi
annotated-types 0.7.0 pypi_0 pypi
anthropic 0.34.2 pypi_0 pypi
anyio 4.6.2.post1 pypi_0 pypi
apscheduler 3.10.4 pypi_0 pypi
async-timeout 5.0.1 pypi_0 pypi
asyncua 1.1.5 pypi_0 pypi
attrs 24.2.0 pypi_0 pypi
av 12.3.0 pypi_0 pypi
backoff 2.2.1 pypi_0 pypi
blas 1.0 mkl
boto3 1.35.60 pypi_0 pypi
botocore 1.35.68 pypi_0 pypi
brotli-python 1.0.9 py310hd77b12b_8
bzip2 1.0.8 h2bbff1b_6
ca-certificates 2024.11.26 haa95532_0
certifi 2024.8.30 py310haa95532_0
cffi 1.17.1 pypi_0 pypi
charset-normalizer 3.4.0 pypi_0 pypi
click 8.1.7 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
coloredlogs 15.0.1 pypi_0 pypi
commonmark 0.9.1 pypi_0 pypi
contourpy 1.3.1 pypi_0 pypi
cryptography 43.0.3 pypi_0 pypi
cuda-cccl 12.6.77 0 nvidia
cuda-cccl_win-64 12.6.77 0 nvidia
cuda-cudart 12.4.127 0 nvidia
cuda-cudart-dev 12.4.127 0 nvidia
cuda-cupti 12.4.127 0 nvidia
cuda-libraries 12.4.1 0 nvidia
cuda-libraries-dev 12.4.1 0 nvidia
cuda-nvrtc 12.4.127 0 nvidia
cuda-nvrtc-dev 12.4.127 0 nvidia
cuda-nvtx 12.4.127 0 nvidia
cuda-opencl 12.6.77 0 nvidia
cuda-opencl-dev 12.6.77 0 nvidia
cuda-profiler-api 12.6.77 0 nvidia
cuda-runtime 12.4.1 0 nvidia
cuda-version 12.6 3 nvidia
cudnn 9.3.0.75 cuda12.6 nvidia
cycler 0.12.1 pypi_0 pypi
cython 3.0.11 pypi_0 pypi
dataclasses-json 0.6.7 pypi_0 pypi
defusedxml 0.7.1 pypi_0 pypi
distro 1.9.0 pypi_0 pypi
dnspython 2.7.0 pypi_0 pypi
docker 7.1.0 pypi_0 pypi
exceptiongroup 1.2.2 pypi_0 pypi
fastapi 0.110.3 pypi_0 pypi
filelock 3.16.1 pypi_0 pypi
flatbuffers 24.3.25 pypi_0 pypi
fonttools 4.55.0 pypi_0 pypi
freetype 2.12.1 ha860e81_0
frozenlist 1.5.0 pypi_0 pypi
fsspec 2024.10.0 pypi_0 pypi
giflib 5.2.2 h7edc060_0
gmpy2 2.1.2 py310h7f96b67_0
google-crc32c 1.6.0 pypi_0 pypi
gputil 1.4.0 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
httpcore 1.0.7 pypi_0 pypi
httpx 0.25.2 pypi_0 pypi
huggingface-hub 0.26.2 pypi_0 pypi
humanfriendly 10.0 pypi_0 pypi
idna 3.10 pypi_0 pypi
ifaddr 0.2.0 pypi_0 pypi
imageio 2.36.0 pypi_0 pypi
inference 0.29.1 pypi_0 pypi
inference-cli 0.29.0 pypi_0 pypi
inference-gpu 0.28.0 pypi_0 pypi
iniconfig 2.0.0 pypi_0 pypi
intel-openmp 2023.1.0 h59b6b97_46320
jinja2 3.1.4 py310haa95532_1
jiter 0.7.1 pypi_0 pypi
jmespath 1.0.1 pypi_0 pypi
jpeg 9e h827c3e9_3
kiwisolver 1.4.7 pypi_0 pypi
lazy-loader 0.4 pypi_0 pypi
lcms2 2.12 h83e58a3_0
lerc 3.0 hd77b12b_0
libcublas 12.4.5.8 0 nvidia
libcublas-dev 12.4.5.8 0 nvidia
libcufft 11.2.1.3 0 nvidia
libcufft-dev 11.2.1.3 0 nvidia
libcurand 10.3.7.77 0 nvidia
libcurand-dev 10.3.7.77 0 nvidia
libcusolver 11.6.1.9 0 nvidia
libcusolver-dev 11.6.1.9 0 nvidia
libcusparse 12.3.1.170 0 nvidia
libcusparse-dev 12.3.1.170 0 nvidia
libdeflate 1.17 h2bbff1b_1
libffi 3.4.4 hd77b12b_1
libjpeg-turbo 2.0.0 h196d8e1_0
libnpp 12.2.5.30 0 nvidia
libnpp-dev 12.2.5.30 0 nvidia
libnvfatbin 12.6.77 0 nvidia
libnvfatbin-dev 12.6.77 0 nvidia
libnvjitlink 12.4.127 0 nvidia
libnvjitlink-dev 12.4.127 0 nvidia
libnvjpeg 12.3.1.117 0 nvidia
libnvjpeg-dev 12.3.1.117 0 nvidia
libpng 1.6.39 h8cc25b3_0
libtiff 4.5.1 hd77b12b_0
libuv 1.48.0 h827c3e9_0
libwebp 1.3.2 hbc33d0d_0
libwebp-base 1.3.2 h3d04722_1
lz4-c 1.9.4 h2bbff1b_1
markupsafe 2.1.3 py310h2bbff1b_0
marshmallow 3.23.1 pypi_0 pypi
matplotlib 3.9.2 pypi_0 pypi
mkl 2023.1.0 h6b88ed4_46358
mkl-service 2.4.0 py310h2bbff1b_1
mkl_fft 1.3.11 py310h827c3e9_0
mkl_random 1.2.8 py310hc64d2fc_0
mpc 1.1.0 h7edee0f_1
mpfr 4.0.2 h62dcd97_1
mpir 3.0.0 hec2e145_1
mpmath 1.3.0 py310haa95532_0
multidict 6.1.0 pypi_0 pypi
mypy-extensions 1.0.0 pypi_0 pypi
networkx 3.4.2 pypi_0 pypi
numpy 1.26.4 pypi_0 pypi
numpy-base 2.0.1 py310h65a83cf_1
nvidia-ml-py 12.560.30 pypi_0 pypi
onnxruntime 1.19.2 pypi_0 pypi
onnxruntime-gpu 1.19.2 pypi_0 pypi
openai 1.55.0 pypi_0 pypi
opencv-python 4.10.0.84 pypi_0 pypi
opencv-python-headless 4.10.0.84 pypi_0 pypi
openjpeg 2.5.2 hae555c5_0
openssl 3.0.15 h827c3e9_0
packaging 24.2 pypi_0 pypi
pandas 2.2.3 pypi_0 pypi
piexif 1.1.3 pypi_0 pypi
pillow 10.4.0 pypi_0 pypi
pip 24.2 py310haa95532_0
pluggy 1.5.0 pypi_0 pypi
prometheus-client 0.21.0 pypi_0 pypi
prometheus-fastapi-instrumentator 6.0.0 pypi_0 pypi
propcache 0.2.0 pypi_0 pypi
protobuf 5.28.3 pypi_0 pypi
py-cpuinfo 9.0.0 pypi_0 pypi
pybase64 1.0.2 pypi_0 pypi
pycparser 2.22 pypi_0 pypi
pydantic 2.10.1 pypi_0 pypi
pydantic-core 2.27.1 pypi_0 pypi
pydantic-settings 2.6.1 pypi_0 pypi
pydot 2.0.0 pypi_0 pypi
pyee 12.1.1 pypi_0 pypi
pygments 2.18.0 pypi_0 pypi
pyjwt 2.10.1 pypi_0 pypi
pylibsrtp 0.10.0 pypi_0 pypi
pyopenssl 24.2.1 pypi_0 pypi
pyparsing 3.2.0 pypi_0 pypi
pyreadline3 3.5.4 pypi_0 pypi
pysocks 1.7.1 py310haa95532_0
pytest 8.3.3 pypi_0 pypi
python 3.10.15 h4607a30_1
python-dateutil 2.9.0.post0 pypi_0 pypi
python-dotenv 1.0.1 pypi_0 pypi
pytorch 2.5.1 py3.10_cuda12.4_cudnn9_0 pytorch
pytorch-cuda 12.4 h3fd98bf_7 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2024.2 pypi_0 pypi
pywin32 308 pypi_0 pypi
pyyaml 6.0.2 py310h827c3e9_0
redis 5.0.8 pypi_0 pypi
requests 2.32.3 py310haa95532_1
requests-file 2.1.0 pypi_0 pypi
requests-toolbelt 1.0.0 pypi_0 pypi
rich 13.0.1 pypi_0 pypi
s3transfer 0.10.4 pypi_0 pypi
scikit-image 0.24.0 pypi_0 pypi
scipy 1.14.1 pypi_0 pypi
setuptools 75.1.0 py310haa95532_0
shapely 2.0.6 pypi_0 pypi
shellingham 1.5.4 pypi_0 pypi
six 1.16.0 pypi_0 pypi
slack-sdk 3.33.4 pypi_0 pypi
sniffio 1.3.1 pypi_0 pypi
sortedcontainers 2.4.0 pypi_0 pypi
sqlite 3.45.3 h2bbff1b_0
starlette 0.37.2 pypi_0 pypi
structlog 24.4.0 pypi_0 pypi
supervision 0.22.0 pypi_0 pypi
sympy 1.13.3 pypi_0 pypi
tbb 2021.8.0 h59b6b97_0
tifffile 2024.9.20 pypi_0 pypi
tk 8.6.14 h0416ee5_0
tldextract 5.1.3 pypi_0 pypi
tokenizers 0.20.3 pypi_0 pypi
tomli 2.1.0 pypi_0 pypi
torchaudio 2.5.1 pypi_0 pypi
torchvision 0.20.1 pypi_0 pypi
tqdm 4.67.0 pypi_0 pypi
twilio 9.3.7 pypi_0 pypi
typer 0.12.5 pypi_0 pypi
typing-extensions 4.12.2 pypi_0 pypi
typing-inspect 0.9.0 pypi_0 pypi
typing_extensions 4.11.0 py310haa95532_0
tzdata 2024.2 pypi_0 pypi
tzlocal 5.2 pypi_0 pypi
urllib3 2.2.3 py310haa95532_0
vc 14.40 h2eaa2aa_1
vs2015_runtime 14.40.33807 h98bb1dd_1
wheel 0.44.0 py310haa95532_0
win_inet_pton 1.1.0 py310haa95532_0
xz 5.4.6 h8cc25b3_1
yaml 0.2.5 he774522_0
yarl 1.18.0 pypi_0 pypi
zlib 1.2.13 h8cc25b3_1
zstd 1.5.6 h8880b57_0
zxing-cpp 2.2.0 pypi_0 pypi

(roboflow_env3) C:\Users\Owner>

Hi Pete,

It seems you have both inference and inference-gpu installed; can you try uninstalling inference (and maybe updating inference-gpu)? I also see you have onnxruntime as well as onnxruntime-gpu; I only have onnxruntime-gpu in my venv.
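Something like this, in your activated environment:

pip uninstall -y inference onnxruntime
pip install --upgrade inference-gpu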

For your reference here are my dependencies that I have after building fresh venv with inference-gpu:

aiofiles==24.1.0
aiohappyeyeballs==2.4.4
aiohttp==3.10.11
aiohttp-retry==2.8.3
aioice==0.9.0
aiortc==1.9.0
aiosignal==1.3.1
aiosqlite==0.20.0
annotated-types==0.7.0
anthropic==0.34.2
anyio==4.7.0
APScheduler==3.11.0
asyncua==1.1.5
attrs==24.2.0
av==12.3.0
backoff==2.2.1
boto3==1.35.60
botocore==1.35.76
certifi==2024.8.30
cffi==1.17.1
charset-normalizer==3.4.0
click==8.1.7
coloredlogs==15.0.1
commonmark==0.9.1
contourpy==1.3.1
cryptography==44.0.0
cycler==0.12.1
Cython==3.0.11
dataclasses-json==0.6.7
defusedxml==0.7.1
distro==1.9.0
dnspython==2.7.0
docker==7.1.0
fastapi==0.110.3
filelock==3.16.1
flatbuffers==24.3.25
fonttools==4.55.2
frozenlist==1.5.0
fsspec==2024.10.0
google-crc32c==1.6.0
h11==0.14.0
httpcore==1.0.7
httpx==0.25.2
huggingface-hub==0.26.5
humanfriendly==10.0
idna==3.10
ifaddr==0.2.0
imageio==2.36.1
inference-gpu==0.29.2
iniconfig==2.0.0
jiter==0.8.0
jmespath==1.0.1
kiwisolver==1.4.7
lazy_loader==0.4
marshmallow==3.23.1
matplotlib==3.9.3
mpmath==1.3.0
multidict==6.1.0
mypy-extensions==1.0.0
networkx==3.4.2
numpy==1.26.4
nvidia-ml-py==12.560.30
onnxruntime-gpu==1.19.2
openai==1.57.0
opencv-python==4.10.0.84
opencv-python-headless==4.10.0.84
packaging==24.2
pandas==2.2.3
pillow==10.4.0
pluggy==1.5.0
prometheus-fastapi-instrumentator==6.0.0
prometheus_client==0.21.1
propcache==0.2.1
protobuf==5.29.1
py-cpuinfo==9.0.0
pybase64==1.0.2
pycparser==2.22
pydantic==2.10.3
pydantic-settings==2.6.1
pydantic_core==2.27.1
pydot==2.0.0
pyee==12.1.1
Pygments==2.18.0
PyJWT==2.10.1
pylibsrtp==0.10.0
pyOpenSSL==24.3.0
pyparsing==3.2.0
pytest==8.3.4
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.2
PyYAML==6.0.2
redis==5.0.8
requests==2.32.3
requests-file==2.1.0
requests-toolbelt==1.0.0
rich==13.0.1
s3transfer==0.10.4
scikit-image==0.24.0
scipy==1.14.1
setuptools==75.6.0
shapely==2.0.6
shellingham==1.5.4
six==1.17.0
slack_sdk==3.33.5
sniffio==1.3.1
sortedcontainers==2.4.0
starlette==0.37.2
structlog==24.4.0
supervision==0.22.0
sympy==1.13.3
tifffile==2024.9.20
tldextract==5.1.3
tokenizers==0.20.3
tqdm==4.67.1
twilio==9.3.8
typer==0.12.5
typing-inspect==0.9.0
typing_extensions==4.12.2
tzdata==2024.2
tzlocal==5.2
urllib3==2.2.3
wheel==0.45.0
yarl==1.18.3
zxing-cpp==2.2.0

Maybe it would be easier to build a fresh conda environment and test there?
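For example:

conda create -n roboflow python=3.10
conda activate roboflow
pip install inference-gpu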

Hope this helps,
Grzegorz

Grzegorz,

I took your advice and created a new Python 3.10 environment using conda. I am getting a similar package list to yours. Please see below:

(roboflow) C:\Users\Owner>conda list

# packages in environment at D:\AnacondaReinstall\envs\roboflow:
#
# Name                    Version                   Build  Channel

aiofiles 24.1.0 pypi_0 pypi
aiohappyeyeballs 2.4.4 pypi_0 pypi
aiohttp 3.10.11 pypi_0 pypi
aiohttp-retry 2.8.3 pypi_0 pypi
aioice 0.9.0 pypi_0 pypi
aiortc 1.9.0 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
aiosqlite 0.20.0 pypi_0 pypi
annotated-types 0.7.0 pypi_0 pypi
anthropic 0.34.2 pypi_0 pypi
anyio 4.7.0 pypi_0 pypi
apscheduler 3.11.0 pypi_0 pypi
async-timeout 5.0.1 pypi_0 pypi
asyncua 1.1.5 pypi_0 pypi
attrs 24.2.0 pypi_0 pypi
av 12.3.0 pypi_0 pypi
backoff 2.2.1 pypi_0 pypi
boto3 1.35.60 pypi_0 pypi
botocore 1.35.76 pypi_0 pypi
bzip2 1.0.8 h2bbff1b_6
ca-certificates 2024.11.26 haa95532_0
certifi 2024.8.30 pypi_0 pypi
cffi 1.17.1 pypi_0 pypi
charset-normalizer 3.4.0 pypi_0 pypi
click 8.1.7 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
coloredlogs 15.0.1 pypi_0 pypi
commonmark 0.9.1 pypi_0 pypi
contourpy 1.3.1 pypi_0 pypi
cryptography 44.0.0 pypi_0 pypi
cycler 0.12.1 pypi_0 pypi
cython 3.0.11 pypi_0 pypi
dataclasses-json 0.6.7 pypi_0 pypi
defusedxml 0.7.1 pypi_0 pypi
distro 1.9.0 pypi_0 pypi
dnspython 2.7.0 pypi_0 pypi
docker 7.1.0 pypi_0 pypi
exceptiongroup 1.2.2 pypi_0 pypi
fastapi 0.110.3 pypi_0 pypi
filelock 3.16.1 pypi_0 pypi
flatbuffers 24.3.25 pypi_0 pypi
fonttools 4.55.2 pypi_0 pypi
frozenlist 1.5.0 pypi_0 pypi
fsspec 2024.10.0 pypi_0 pypi
google-crc32c 1.6.0 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
httpcore 1.0.7 pypi_0 pypi
httpx 0.25.2 pypi_0 pypi
huggingface-hub 0.26.5 pypi_0 pypi
humanfriendly 10.0 pypi_0 pypi
idna 3.10 pypi_0 pypi
ifaddr 0.2.0 pypi_0 pypi
imageio 2.36.1 pypi_0 pypi
inference-gpu 0.29.2 pypi_0 pypi
iniconfig 2.0.0 pypi_0 pypi
jiter 0.8.0 pypi_0 pypi
jmespath 1.0.1 pypi_0 pypi
kiwisolver 1.4.7 pypi_0 pypi
lazy-loader 0.4 pypi_0 pypi
libffi 3.4.4 hd77b12b_1
marshmallow 3.23.1 pypi_0 pypi
matplotlib 3.9.3 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
multidict 6.1.0 pypi_0 pypi
mypy-extensions 1.0.0 pypi_0 pypi
networkx 3.4.2 pypi_0 pypi
numpy 1.26.4 pypi_0 pypi
nvidia-ml-py 12.560.30 pypi_0 pypi
onnxruntime-gpu 1.19.0 pypi_0 pypi
openai 1.57.0 pypi_0 pypi
opencv-python 4.10.0.84 pypi_0 pypi
opencv-python-headless 4.10.0.84 pypi_0 pypi
openssl 3.0.15 h827c3e9_0
packaging 24.2 pypi_0 pypi
pandas 2.2.3 pypi_0 pypi
pillow 10.4.0 pypi_0 pypi
pip 24.2 py310haa95532_0
pluggy 1.5.0 pypi_0 pypi
prometheus-client 0.21.1 pypi_0 pypi
prometheus-fastapi-instrumentator 6.0.0 pypi_0 pypi
propcache 0.2.1 pypi_0 pypi
protobuf 5.29.1 pypi_0 pypi
py-cpuinfo 9.0.0 pypi_0 pypi
pybase64 1.0.2 pypi_0 pypi
pycparser 2.22 pypi_0 pypi
pydantic 2.10.3 pypi_0 pypi
pydantic-core 2.27.1 pypi_0 pypi
pydantic-settings 2.6.1 pypi_0 pypi
pydot 2.0.0 pypi_0 pypi
pyee 12.1.1 pypi_0 pypi
pygments 2.18.0 pypi_0 pypi
pyjwt 2.10.1 pypi_0 pypi
pylibsrtp 0.10.0 pypi_0 pypi
pyopenssl 24.3.0 pypi_0 pypi
pyparsing 3.2.0 pypi_0 pypi
pyreadline3 3.5.4 pypi_0 pypi
pytest 8.3.4 pypi_0 pypi
python 3.10.15 h4607a30_1
python-dateutil 2.9.0.post0 pypi_0 pypi
python-dotenv 1.0.1 pypi_0 pypi
pytz 2024.2 pypi_0 pypi
pywin32 308 pypi_0 pypi
pyyaml 6.0.2 pypi_0 pypi
redis 5.0.8 pypi_0 pypi
requests 2.32.3 pypi_0 pypi
requests-file 2.1.0 pypi_0 pypi
requests-toolbelt 1.0.0 pypi_0 pypi
rich 13.0.1 pypi_0 pypi
s3transfer 0.10.4 pypi_0 pypi
scikit-image 0.24.0 pypi_0 pypi
scipy 1.14.1 pypi_0 pypi
setuptools 75.1.0 py310haa95532_0
shapely 2.0.6 pypi_0 pypi
shellingham 1.5.4 pypi_0 pypi
six 1.17.0 pypi_0 pypi
slack-sdk 3.33.5 pypi_0 pypi
sniffio 1.3.1 pypi_0 pypi
sortedcontainers 2.4.0 pypi_0 pypi
sqlite 3.45.3 h2bbff1b_0
starlette 0.37.2 pypi_0 pypi
structlog 24.4.0 pypi_0 pypi
supervision 0.22.0 pypi_0 pypi
sympy 1.13.3 pypi_0 pypi
tifffile 2024.9.20 pypi_0 pypi
tk 8.6.14 h0416ee5_0
tldextract 5.1.3 pypi_0 pypi
tokenizers 0.20.3 pypi_0 pypi
tomli 2.2.1 pypi_0 pypi
tqdm 4.67.1 pypi_0 pypi
twilio 9.3.8 pypi_0 pypi
typer 0.12.5 pypi_0 pypi
typing-extensions 4.12.2 pypi_0 pypi
typing-inspect 0.9.0 pypi_0 pypi
tzdata 2024.2 pypi_0 pypi
tzlocal 5.2 pypi_0 pypi
urllib3 2.2.3 pypi_0 pypi
vc 14.40 haa95532_2
vs2015_runtime 14.42.34433 h9531ae6_2
wheel 0.44.0 py310haa95532_0
xz 5.4.6 h8cc25b3_1
yarl 1.18.3 pypi_0 pypi
zlib 1.2.13 h8cc25b3_1
zxing-cpp 2.2.0 pypi_0 pypi

(roboflow) C:\Users\Owner>

However, rerunning from VS Code, I am still getting some errors. It seems that CUDA is detected as a provider, but something is still failing. Here are the errors I am seeing. I am running CUDA 12.6 and cuDNN 9.6; based on the ONNX Runtime requirements, this should work. Very perplexing.

PS C:\Users\Owner> & D:/AnacondaReinstall/envs/roboflow/python.exe "c:/Users/Owner/Roboflow Test 20241123 RSTP.py"
UserWarning: Specified provider 'OpenVINOExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
2024-12-07 21:11:52.7607413 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\AnacondaReinstall\envs\roboflow\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

2024-12-07 21:11:52.7752091 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (NVIDIA - CUDA | onnxruntime), make sure they're in the PATH, and that your GPU is supported.

@Pete_Olesen do you have the latest Visual C++ runtime installed? See: Latest supported Visual C++ Redistributable downloads | Microsoft Learn

See this thread for more info: Cant get GPU to work with ONNX Runtime 1.19 Cuda 12.6 CuDNN 9 RTXA4000 · Issue #21825 · microsoft/onnxruntime · GitHub

If that doesn't work, maybe try pip install msvc-runtime==14.40.33807 in your environment?

Hi @Pete_Olesen ,

In addition to what @peter_roboflow suggested, I guess you already have your PATH updated with paths to CUDA_INSTALL_DIR/bin, CUDA_INSTALL_DIR/libnvvp and cuDNN_INSTALL_DIR/bin?
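A quick way to check whether Windows can actually load the cuDNN DLL from your PATH (assuming the cuDNN 9 DLL name on Windows, cudnn64_9.dll):

python -c "import ctypes; ctypes.WinDLL('cudnn64_9.dll'); print('cuDNN DLL found')"

If that raises an error, the DLL (or one of its own dependencies) is not visible on PATH, which would match the "LoadLibrary failed with error 126" message.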

We can also try to capture more verbose logs from ONNX Runtime by setting ORT_LOG_LEVEL=VERBOSE and running your script.
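If it's easier than exporting them in the shell, a sketch of setting both variables from Python instead; they should be set before importing inference, so they are picked up when the ONNX session is created:

import os

# set these before importing inference / onnxruntime
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = "[CUDAExecutionProvider]"
os.environ["ORT_LOG_LEVEL"] = "VERBOSE"

from inference.core.interfaces.stream.inference_pipeline import InferencePipeline  # import after the env vars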

Thanks,
Grzegorz

I am having a similar issue. I've worked through most of the steps on this page (and about 100 other sites :) ). The only difference right now is that I'm not in a venv for this. I was, but then certain global things had to be updated anyway, so I just stayed global as I worked through this. My current error is below. I have had other errors come and go, but this one will not go away. (The GPU is an NVIDIA GeForce GTX 1660 Ti.) I don't want to derail the current solve for @Pete_Olesen; just wanted to chime in that I'm following the discussion with a similar issue.

UserWarning: Specified provider 'OpenVINOExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'

These warnings are a bit misleading; they are not actually a problem for you.

OpenVINO is only available if you install it (it's an optimized Intel runtime), and the same goes for CoreML (an optimized Apple Silicon runtime).

So both of those providers being unavailable is OK; it will just select CUDAExecutionProvider, which is the one using your GPU.
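A quick way to double-check what onnxruntime itself reports as available:

python -c "import onnxruntime; print(onnxruntime.get_available_providers())"

If CUDAExecutionProvider shows up in that list, the GPU build of onnxruntime is installed; whether it can actually load CUDA/cuDNN at session creation is a separate question.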

Hi @Automatez ,

To add on top of @peter_roboflow's explanation, you can narrow down the list of available ONNX execution providers by setting ONNXRUNTIME_EXECUTION_PROVIDERS='[CUDAExecutionProvider]' as mentioned in one of the replies above. Setting this env variable should make those warnings go away.

Can you share the whole log?

Hope this helps,
Grzegorz

Update on my end: following some guidance in one of those links, I looked at the NVIDIA matrix for CUDA version support. It said that on my "older" GPU generation I should be using CUDA 11.8. So I uninstalled 12.6 and loaded that up, and now I have the errors below.

These errors seem to indicate two things. First, that maybe I need to be on a 12.x CUDA after all. Now, the matrix does say to go to 11.8 "for best performance" on my GPU, so maybe I don't have to be on 11.8 to make it work; it just might not run as fast?

And the ONNX piece complicates things, because their site (linked in the error log) only gives options for CUDA 11.8 with cuDNN <= 8.x. But the NVIDIA site says I can use cuDNN 9.x with CUDA 11.8. So do I just need to back cuDNN down to 8.x to get ONNX Runtime to work? (So much going on!)

https://docs.nvidia.com/deeplearning/cudnn/latest/reference/support-matrix.html

UserWarning: Specified provider 'OpenVINOExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
2024-12-09 16:17:11.2470127 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\zacra\anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

2024-12-09 16:17:11.2573155 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (NVIDIA - CUDA | onnxruntime), make sure they're in the PATH, and that your GPU is supported.

THEN
When I set the env variable as you said, the errors do reduce to the below.

I set it via two lines in my Python file:
import os
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = '["CUDAExecutionProvider"]'

UserWarning: Specified provider '"CUDAExecutionProvider"' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
*************** EP Error ***************
EP Error Unknown Provider Type: "CUDAExecutionProvider" when using ['"CUDAExecutionProvider"']
Falling back to ['CPUExecutionProvider'] and retrying.


AND THEN

When I changed the provider line to the one below, I got the new errors further below.

os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = '[CUDAExecutionProvider]'

Result:
2024-12-09 16:44:39.7958310 [E:onnxruntime:Default, provider_bridge_ort.cc:1992 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\zacra\anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

2024-12-09 16:44:39.8053160 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:965 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

@Automatez please follow the steps here to reinstall onnxruntime-gpu for CUDA 11: Install ONNX Runtime | onnxruntime

Tried these two lines and got Requirement already satisfied for everything. Rats.

# For CUDA 11.x, please use the following instructions to install from the ORT Azure DevOps feed for 1.19.2 or later

pip install flatbuffers numpy packaging protobuf sympy
pip install onnxruntime-gpu --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/pypi/simple/
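(I am guessing pip would need to be forced to re-pull from that feed, something like the line below, but I have not confirmed it:)

pip install --force-reinstall --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/pypi/simple/ onnxruntime-gpu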

And I was hoping there'd be a common fix with @Pete_Olesen, but maybe what I'm trying to run is part of the problem, so I dropped my short script below in case that sheds any light on things. (Really appreciate the help here, btw!)

# https://inference.roboflow.com/quickstart/run_a_model/#install-inference

import os

# set the ONNX Runtime execution providers before importing inference
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = "[CUDAExecutionProvider]"
os.environ["ORT_LOG_LEVEL"] = "VERBOSE"

# import a utility function for loading Roboflow models
from inference import get_model

# define the image url to use for inference
image = "https://media.roboflow.com/inference/people-walking.jpg"

# load a pre-trained yolov8n model
model = get_model(model_id="yolov8n-640")

# run inference on our chosen image; image can be a url, a numpy array, a PIL image, etc.
results = model.infer(image)