I’m a hobbyist programmer who signed up for Roboflow after seeing some of the really cool demo projects your team put together, especially in the video space. I’d like to analyze my own video, but I don’t see how that’s possible without spending a large sum of money.
On the free tier you only get 1,000 API calls, which works out to about 3–5 minutes of video. On the Starter plan you only get 10,000.
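For reference, here’s the rough math behind that 3–5 minute estimate. A minimal sketch, assuming one hosted API call per sampled frame; the sampling rates (3 and 5 fps) are my own illustrative assumption, not anything from Roboflow’s docs:

```python
# Back-of-envelope: how much video does a 1,000-call quota cover?
# Assumption: each sampled frame costs exactly one hosted API call.
QUOTA = 1_000  # free-tier monthly API calls

def minutes_of_video(quota: int, sample_fps: float) -> float:
    """Minutes of video covered if every sampled frame is one API call."""
    return quota / sample_fps / 60

for fps in (3, 5):
    print(f"sampling at {fps} fps: {minutes_of_video(QUOTA, fps):.1f} min of video")
    # 5 fps -> ~3.3 min, 3 fps -> ~5.6 min, hence "about 3-5 minutes"
```

Sampling more sparsely stretches the quota further, but for anything resembling live analysis the free tier runs out fast.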
It’d be great to run these models locally or on a Raspberry Pi, as your tutorials show. But whenever I try to run them, it says I need an enterprise license, which I assume is crazy expensive.
Is there any way to do a hobby project like this with Roboflow? Or did I find the wrong product?
It is true that with the free plan you only get 1,000 API calls, but there are solutions.
One solution would be to explore the edge deployment options available under the free plan, such as roboflow.js, NVIDIA Jetson, Luxonis OAK, or the iOS SDK. These run inference on-device and therefore don’t count against your monthly hosted inference quota.
In my experience, if you want live webcam inference, the hand-detector webcam demo is a good place to look; you can use its code as a starting point.
Otherwise, you could look into contributing to Roboflow and see whether the higher quotas and limits that come with contributing would cover your use case.
I don’t mind subscribing to the starter plan if needed.
Is there a reason why you wouldn’t want to use roboflow.js? You could stream webcam footage and run inference on it live. Here’s a related demo using OBS.