I’m trying to follow this example: `app/page.tsx` in the roboflow/inferencejs-nextjs-example repo on GitHub, but the bounding boxes are off in both size and position. I know iOS has quirks in how it reports video dimensions. Roboflow’s own frontend, where you can try a webcam against your models, seems to work well, though. Is there any code I can look at for real-time webcam inference, particularly the part that draws the boxes?
I’m passing a `CVImage` constructed from an `HTMLVideoElement` to the `.infer()` call. It seems there is some extra code needed to get the bbox results drawn in the correct location. The example I linked does not use `window.devicePixelRatio`, but I’ve seen another example on the forums that did.
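For context, here is a minimal sketch of the coordinate mapping I suspect is needed. All the names below are my own (hypothetical), and I’m assuming the predictions come back as center-based `x`/`y` plus `width`/`height` in the video’s intrinsic `videoWidth`/`videoHeight` space, which then have to be scaled to the canvas, optionally multiplied by `window.devicePixelRatio` when the canvas backing store is scaled:

```typescript
// Hypothetical shape of one detection in the video's intrinsic pixel space.
interface Detection {
  x: number;      // box center x, in video pixels
  y: number;      // box center y, in video pixels
  width: number;  // box width, in video pixels
  height: number; // box height, in video pixels
}

// Map a detection from intrinsic video coordinates to canvas coordinates.
// displayW/displayH are the CSS size of the element the boxes are drawn
// over; dpr is window.devicePixelRatio when the canvas buffer is scaled
// (pass 1 if the buffer matches the CSS size).
function scaleDetection(
  det: Detection,
  videoW: number,
  videoH: number,
  displayW: number,
  displayH: number,
  dpr: number = 1,
): Detection {
  const sx = (displayW / videoW) * dpr;
  const sy = (displayH / videoH) * dpr;
  return {
    x: det.x * sx,
    y: det.y * sy,
    width: det.width * sx,
    height: det.height * sy,
  };
}

// Drawing would then convert the scaled center-based box to a top-left
// rect, e.g. ctx.strokeRect(x - width / 2, y - height / 2, width, height).
```

Is this roughly the logic the Roboflow frontend uses, or is there more to it on iOS (e.g. the video element reporting rotated or letterboxed dimensions)?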