Hi there, I’ve been working on an object detection project that identifies honey bees and whether or not they are carrying pollen. I trained a YOLOv8 model using various open-source datasets and deployed it on Roboflow. The issue is that inference works well when I upload images, but not when I try live inference with my webcam; it just keeps saying ‘loading model weights’. I’ve granted webcam access when the browser (Firefox) requested it.
Here is the public view of my model:
The webcam doesn’t even open this way.
I’ve also tried using my webcam from the model section of my project. There, the webcam opens, but even then no inference happens: 0 fps … I’m assuming no frames are being captured at all.
Is it because I’m missing some sort of enterprise plan for my account?
Hi @fahiar10, it isn’t a feature-availability limitation; rather, it’s the image shape being passed to roboflow.js. As you can see in the error log, the expected shape is [1, 3, 640, 640], but the image is being passed in as [1, 640, 640, 3]. [1, 3, 640, 640] refers to: 1 image or video frame, with 3 channels (R, G, and B), at a width of 640 and a height of 640.
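For reference, here is a minimal sketch (plain NumPy, not the actual roboflow.js internals) of converting a frame from the channels-last layout that was passed in ([1, 640, 640, 3]) to the channels-first layout the model expects ([1, 3, 640, 640]):

```python
import numpy as np

# Hypothetical 640x640 RGB webcam frame in channels-last (HWC) layout
frame = np.zeros((640, 640, 3), dtype=np.float32)

# Add the batch dimension -> [1, 640, 640, 3] (NHWC)
nhwc = np.expand_dims(frame, axis=0)

# Reorder to channels-first -> [1, 3, 640, 640] (NCHW), the shape the model expects
nchw = np.transpose(nhwc, (0, 3, 1, 2))

print(nhwc.shape)  # (1, 640, 640, 3)
print(nchw.shape)  # (1, 3, 640, 640)
```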
I’ll add this to our current bug report for the team.
In the meantime, if you use a Roboflow Train credit to train a model version, roboflow.js will work for you.
Another option is to use one of the scripts in this repository to run inference: https://github.com/roboflow/computer-vision-utilities
Here are the video inference scripts for object detection:
- On a webcam stream: draw_stream.py in roboflow/roboflow-computer-vision-utilities on GitHub
- On a video file: draw_vid.py in roboflow/roboflow-computer-vision-utilities on GitHub
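If it helps, here is a rough sketch of the same idea: grabbing webcam frames with OpenCV and sending them to a trained Roboflow model with the roboflow Python package. The API key, project ID, and version number are placeholders you would replace with your own, and the exact predict() options may differ slightly by package version.

```python
import cv2
from roboflow import Roboflow

# Placeholders: replace with your own API key, project ID, and version number
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace().project("your-bee-project-id")
model = project.version(1).model

cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Simple approach: write the frame to disk and send the file for prediction
    cv2.imwrite("frame.jpg", frame)
    result = model.predict("frame.jpg", confidence=40, overlap=30).json()

    # Draw each returned box (x/y are box centers in Roboflow's response format)
    for pred in result.get("predictions", []):
        x, y = int(pred["x"]), int(pred["y"])
        w, h = int(pred["width"]), int(pred["height"])
        cv2.rectangle(frame, (x - w // 2, y - h // 2), (x + w // 2, y + h // 2),
                      (0, 255, 0), 2)
        cv2.putText(frame, pred["class"], (x - w // 2, y - h // 2 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("bee detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```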