How to run roboflow models offline in a React app?

The title. I ran the Roboflow Inference Server locally on my Mac (the Docker image), but I’m unsure how to use the inferencejs package to run inference against the local inference server, more specifically in Vite.

I’ve used the (I think) deprecated roboflow.js lib with window.roboflow.auth etc., but I’m unsure how to use it to make requests to the local Roboflow inference server.

How do I do this on the inferencejs library?

Hey @Sean_C

roboflow.js is not deprecated, but its use is somewhat unrelated to the inference server: roboflow.js requires an internet connection to Roboflow to download the model, then runs inference on-device using local compute.

If you’re looking to run the model locally, you’re on the right track with Roboflow Inference, which lets you start your own inference server. To use it, you just send requests to a local endpoint (by default, http://localhost:9001/) the same way you would to Roboflow’s Hosted Inference API.
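As a rough sketch of what that request looks like from a browser app: the function below builds the local endpoint URL and POSTs a base64-encoded image to it, mirroring the hosted API's route shape. The model ID, version, and API key are placeholders, and the default port/route are assumptions based on the inference server's defaults, so adjust to your setup.

```javascript
// Hypothetical helper: builds the local inference URL. Assumes the default
// port 9001 and a /{model}/{version} route mirroring the hosted API.
function buildLocalInferenceUrl(modelId, version, apiKey, host = "http://localhost:9001") {
  return `${host}/${modelId}/${version}?api_key=${encodeURIComponent(apiKey)}`;
}

// Example: send a base64-encoded image to the local server and
// return the parsed predictions JSON.
async function detect(base64Image) {
  // "your-model" / "1" / "YOUR_API_KEY" are placeholders for your project.
  const url = buildLocalInferenceUrl("your-model", "1", "YOUR_API_KEY");
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: base64Image,
  });
  return res.json();
}
```

Since this is plain `fetch`, it works the same in a Vite/React app whether you point it at localhost or the hosted endpoint.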

See more details here: Predict on an Image Over HTTP - Roboflow Inference

Hey Leo,

Thank you for answering my question. What is the difference between roboflow.js and inferencejs, then? Is the only way to hit my local inference server (instead of Roboflow’s servers) to use the REST API directly, rather than inferencejs or roboflow.js?

I already set up my project using roboflow.js but I’m unsure how to use my local Roboflow Inference Server. Is it possible to use the inferencejs library on a local endpoint?

Nonetheless, thanks for your help.

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.