I am trying to perform inference in the browser using a webcam as the source, but I am struggling to find a working solution. Do you have any ideas or guidelines on how to achieve this by loading an ONNX model?
Additionally, as a subsequent step, I would like to draw bounding boxes around the detected objects.
It works with Python, thank you, but my idea is to run the inference locally in the browser: open
localhost:3000/inference.html
and see my webcam feed with the bounding boxes, just by loading the ONNX (or another format) model using JavaScript.
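For what it's worth, here is a minimal sketch of that setup using onnxruntime-web, which can load an ONNX model directly in the browser. The file name `model.onnx`, the 640×640 input size, and especially the output decoding (rows of `[x1, y1, x2, y2, score, classId]`) are assumptions for illustration; the preprocessing and post-processing have to match however your model was actually exported.

```html
<!-- inference.html: a rough sketch, not production code -->
<!DOCTYPE html>
<html>
<body>
  <!-- The hidden video element feeds the webcam; frames and boxes are drawn on the canvas -->
  <video id="webcam" autoplay muted playsinline width="640" height="640" style="display:none"></video>
  <canvas id="overlay" width="640" height="640"></canvas>

  <!-- onnxruntime-web exposes a global `ort` object -->
  <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.min.js"></script>
  <script>
    const video = document.getElementById('webcam');
    const canvas = document.getElementById('overlay');
    const ctx = canvas.getContext('2d');
    const SIZE = 640; // assumed model input resolution

    // Convert the current video frame to a normalized NCHW float32 tensor.
    function frameToTensor() {
      ctx.drawImage(video, 0, 0, SIZE, SIZE);
      const { data } = ctx.getImageData(0, 0, SIZE, SIZE); // RGBA bytes
      const chw = new Float32Array(3 * SIZE * SIZE);
      for (let i = 0; i < SIZE * SIZE; i++) {
        chw[i] = data[i * 4] / 255;                       // R plane
        chw[SIZE * SIZE + i] = data[i * 4 + 1] / 255;     // G plane
        chw[2 * SIZE * SIZE + i] = data[i * 4 + 2] / 255; // B plane
      }
      return new ort.Tensor('float32', chw, [1, 3, SIZE, SIZE]);
    }

    // Draw one box; coordinates are assumed to be pixels in the input image.
    function drawBox(x1, y1, x2, y2, label) {
      ctx.strokeStyle = 'lime';
      ctx.fillStyle = 'lime';
      ctx.lineWidth = 2;
      ctx.strokeRect(x1, y1, x2 - x1, y2 - y1);
      ctx.fillText(label, x1 + 2, y1 + 12);
    }

    async function main() {
      // Start the webcam.
      video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
      await video.play();

      // Load the ONNX model (path is hypothetical).
      const session = await ort.InferenceSession.create('model.onnx');

      async function loop() {
        const input = frameToTensor();
        // Repaint the raw frame before overlaying boxes.
        ctx.drawImage(video, 0, 0, SIZE, SIZE);

        // Feed/output names depend on how the model was exported.
        const results = await session.run({ [session.inputNames[0]]: input });
        const output = results[session.outputNames[0]];

        // ASSUMPTION: output shape [1, N, 6] with rows [x1, y1, x2, y2, score, classId].
        // Real detectors (YOLO, SSD, ...) differ; adapt this decoding to your export.
        for (let i = 0; i < output.dims[1]; i++) {
          const [x1, y1, x2, y2, score, cls] = output.data.slice(i * 6, i * 6 + 6);
          if (score > 0.5) drawBox(x1, y1, x2, y2, `class ${cls} (${score.toFixed(2)})`);
        }
        requestAnimationFrame(loop);
      }
      loop();
    }

    main();
  </script>
</body>
</html>
```

Serving that file from localhost:3000 should be enough to test the loop end to end, once the tensor layout and output decoding match the actual model.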
There are several ways to run inference, both hosted and locally.
The resource Trevor shared in his first answer, roboflow.js, runs your model locally in your browser and returns the inference results in JSON format.
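Once you have that JSON each frame, drawing the boxes is just canvas work. Below is a rough sketch that assumes each prediction looks like `{ x, y, width, height, class, confidence }` with `x`/`y` giving the box center in pixels, which is the format Roboflow's docs describe for detection results; double-check it against the JSON your model actually returns. It also assumes the same `video` and `ctx` (canvas context) variables as the earlier sketch.

```js
// Sketch: turn Roboflow-style JSON detections into boxes on a canvas overlay.
function drawPredictions(ctx, video, predictions, minConfidence = 0.5) {
  // Repaint the current webcam frame, then overlay the boxes.
  ctx.drawImage(video, 0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.strokeStyle = 'lime';
  ctx.fillStyle = 'lime';
  ctx.lineWidth = 2;
  ctx.font = '12px sans-serif';

  for (const p of predictions) {
    if (p.confidence < minConfidence) continue;
    const left = p.x - p.width / 2;  // convert center-based coords to top-left
    const top = p.y - p.height / 2;
    ctx.strokeRect(left, top, p.width, p.height);
    ctx.fillText(`${p.class} ${(p.confidence * 100).toFixed(0)}%`, left + 2, top - 4);
  }
}
```

You would call something like this inside a requestAnimationFrame loop, passing whatever detections roboflow.js (or your own ONNX post-processing) produced for that frame.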