Inferencing with Roboflow advice

Hi,

I currently have my own solution that works with tensorflow-cpu running on a VM behind a Flask server. I basically export a video frame to a base64 image in my web app, then send it over to the Flask server via AJAX, which runs inference and returns a base64 PNG. Unfortunately the process takes too long (about 30 seconds per image), and since I only need to process a small number of images (about 200) each month, it seems a waste of money to purchase a grunty GPU for this.
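
For reference, my current pipeline is roughly this shape (a trimmed-down sketch, not my actual code; `run_model` stands in for the TensorFlow-CPU inference step, which is where the ~30 seconds goes):

```python
import base64
import io

from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)

def run_model(image):
    # Placeholder for the real TensorFlow-CPU inference; in the actual
    # service this returns an image with the results drawn on it.
    return image

@app.route("/infer", methods=["POST"])
def infer():
    # Decode the base64 frame the web app posts via AJAX.
    image = Image.open(io.BytesIO(base64.b64decode(request.form["image"])))

    result = run_model(image)

    # Send back the rendered result as a base64 PNG.
    buf = io.BytesIO()
    result.save(buf, format="PNG")
    return jsonify({"image": base64.b64encode(buf.getvalue()).decode("utf-8")})
```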

Can I achieve this with Roboflow?

Does anyone have any advice on how I can do this, please?

Hi - you can definitely achieve this with Roboflow. Our hosted inference API would allow you to receive inference results easily, without the fuss of maintaining your own GPU.

You could use the API with any model you train with Roboflow or upload to the platform.
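
For example, calling the hosted API from Python with a base64-encoded frame would look roughly like this (the model ID, version, and API key below are placeholders for your own):

```python
import base64
import requests

API_KEY = "YOUR_ROBOFLOW_API_KEY"  # placeholder -- use your own key
MODEL_ENDPOINT = "https://detect.roboflow.com/your-model/1"  # placeholder model ID and version

def infer(image_path: str) -> dict:
    # Base64-encode the image -- the same payload shape your Flask
    # service already receives from the web app.
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("utf-8")

    # POST the encoded image to the hosted inference endpoint.
    resp = requests.post(
        MODEL_ENDPOINT,
        params={"api_key": API_KEY},
        data=img_b64,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    resp.raise_for_status()
    return resp.json()  # JSON response with a "predictions" list

print(infer("frame.png"))
```

Since the endpoint accepts base64 images, your existing AJAX call could target it (with your API key) in place of the Flask route.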

Thanks, Jacob,

Would I just point my endpoint to Roboflow instead of my Flask service?

Yup! I’ll keep watching this thread, so just hit me up here if you run into any issues.