Hi there, I've been working on an object detection project that identifies honey bees and whether or not they carry pollen. I trained a YOLOv8 model using various open-source datasets and deployed it on Roboflow. The issue is that inference works well when I upload images, but it doesn't work when I try live inference with my webcam. It just keeps saying "loading model weights". I've granted access to my webcam when the browser (Firefox) requested it.
Here is the public view of my model:
It doesn't even open my webcam this way.
I've tried using my webcam from the model section of my project as well.
There, it opens my webcam, but even then there's no inference happening. 0 fps … I'm assuming no frames are being captured at all.
Is it because I'm missing some sort of enterprise plan for my account?
Hi @fahiar10, it isn't due to a restriction on feature availability, but rather due to the image shape being passed to roboflow.js. As you can see in the error log, the expected shape is [1, 3, 640, 640], but the image is being passed in as [1, 640, 640, 3].
[1, 3, 640, 640] refers to 1 image or video frame, with 3 channels (R, G, and B), at a size of 640 pixels wide by 640 pixels high.
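For anyone hitting the same mismatch in their own preprocessing code, the difference is just a channels-last (NHWC) versus channels-first (NCHW) layout, and the fix is a transpose of the axes. A minimal sketch with NumPy, assuming you already have a 640×640 RGB frame (the variable names here are illustrative and not part of the Roboflow API):

```python
import numpy as np

# Illustrative example: one 640x640 RGB frame in channels-last layout.
frame_nhwc = np.zeros((1, 640, 640, 3), dtype=np.float32)  # [batch, height, width, channels]

# Reorder the axes to channels-first, the layout the model expects.
frame_nchw = np.transpose(frame_nhwc, (0, 3, 1, 2))        # [batch, channels, height, width]

print(frame_nchw.shape)  # (1, 3, 640, 640)
```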
I'll add this to our current bug report for the team.
In the meantime, if you use a Roboflow Train credit to train a model version, roboflow.js will work for you.
Another option is to use one of the scripts in this repository to run inference: https://github.com/roboflow/computer-vision-utilities
Here are the video inference scripts for object detection:
- On a webcam stream: roboflow-computer-vision-utilities/draw_stream.py at main · roboflow/roboflow-computer-vision-utilities · GitHub
- On a video file: roboflow-computer-vision-utilities/draw_vid.py at main · roboflow/roboflow-computer-vision-utilities · GitHub
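If you want to see the idea behind the webcam script without cloning the repo, here is a minimal sketch using OpenCV and the Roboflow hosted inference API. The model ID, version, API key, and confidence/overlap values are placeholders you would replace with your own, and the exact request parameters may differ from what draw_stream.py uses, so treat this as an illustration rather than a drop-in replacement:

```python
import base64

import cv2
import requests

# Placeholder values: replace with your own project details.
API_KEY = "YOUR_API_KEY"
MODEL_ID = "your-bee-pollen-model"
VERSION = "1"
INFER_URL = (
    f"https://detect.roboflow.com/{MODEL_ID}/{VERSION}"
    f"?api_key={API_KEY}&confidence=40&overlap=30"
)

cap = cv2.VideoCapture(0)  # open the default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Encode the frame as JPEG, then base64, and send it to the hosted API.
    _, buffer = cv2.imencode(".jpg", frame)
    payload = base64.b64encode(buffer).decode("utf-8")
    response = requests.post(
        INFER_URL,
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    predictions = response.json().get("predictions", [])

    # Draw each predicted box; the response gives center x/y plus width/height.
    for p in predictions:
        x0 = int(p["x"] - p["width"] / 2)
        y0 = int(p["y"] - p["height"] / 2)
        x1 = int(p["x"] + p["width"] / 2)
        y1 = int(p["y"] + p["height"] / 2)
        cv2.rectangle(frame, (x0, y0), (x1, y1), (0, 255, 0), 2)
        cv2.putText(frame, p["class"], (x0, y0 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("inference", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```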