How to deploy my model on the web through inferencejs

I was trying to build a web app with inferencejs that lets a user upload an image and runs object detection with my model hosted on Roboflow.

Following the documentation: Web Browser | Roboflow Docs

I carefully checked the three important parameters (MODEL_NAME, MODEL_VERSION, PUBLIC_KEY), but I keep running into the issue that the model can't be loaded, whether I install via npm or add the script tag reference to my page's <head> tag. Both approaches failed.

When I use the script tag reference, the browser console reports that inference is not defined.
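One thing I tried for the script-tag route was waiting for the bundle to finish loading before touching the global, with a small helper like this (the helper and the global name `inferencejs` are my own guesses, not from the docs):

```javascript
// Hypothetical helper: poll until a global (e.g. the inferencejs CDN
// bundle) is defined, then hand it back; reject if it never shows up.
function whenDefined(name, { intervalMs = 50, timeoutMs = 5000 } = {}) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const timer = setInterval(() => {
      if (globalThis[name] !== undefined) {
        clearInterval(timer);
        resolve(globalThis[name]);
      } else if (Date.now() - start > timeoutMs) {
        clearInterval(timer);
        reject(new Error(`${name} is still not defined after ${timeoutMs} ms`));
      }
    }, intervalMs);
  });
}

// Usage (in the browser, assuming the bundle exposes `inferencejs`):
// whenDefined("inferencejs").then((lib) => { /* start the engine here */ });
```

Even with the wait, I still hit the same "not defined" error, which makes me suspect the global name itself.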

When I use the inferencejs npm package, it throws an error as well.

I can process images with inference_sdk in Python, but I can't deploy the model on the web.

Any guidance or alternative methods to deploy the app on the web would be greatly appreciated.

Hi @Anthony!
This sounds like an awesome project!! A couple of questions to consider while determining your deployment method:

  • Do you want browser-side or server-side inference?
  • Is your model public or private?

For example, if you want to run server-side inference with a private model, I suggest implementing inference_sdk, whose documentation I've linked.

Hi Ford,

Thank you for your suggestion.

I’m looking to run inference on the browser side, and the model I’m using is private. My initial plan was to build a lightweight project and deploy it on Netlify, which offers free hosting for small web apps. I spent a day working with inferencejs, but unfortunately I kept running into issues. I’m not sure what went wrong; I tried various approaches, including the HTTP API, but none of them worked.
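In case it helps with reproducing, this is roughly the minimal page I was testing the script-tag route with. The model name, version, and key are placeholders, and I'm assuming the CDN bundle exposes an `inferencejs` global with `InferenceEngine` and `CVImage` on it; that assumption may be exactly where it breaks:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- CDN bundle; exact URL taken from the Roboflow docs -->
    <script src="https://cdn.jsdelivr.net/npm/inferencejs"></script>
  </head>
  <body>
    <input type="file" id="upload" accept="image/*" />
    <script>
      // Placeholders: substitute real values for your project
      const PUBLISHABLE_KEY = "rf_xxxxxxxx";
      const MODEL_NAME = "my-model";
      const MODEL_VERSION = 1;

      window.addEventListener("load", async () => {
        // Assumes the bundle exposes an `inferencejs` global
        const { InferenceEngine, CVImage } = inferencejs;
        const engine = new InferenceEngine();
        const workerId = await engine.startWorker(
          MODEL_NAME, MODEL_VERSION, PUBLISHABLE_KEY
        );

        // Run detection on whatever image the user uploads
        document.getElementById("upload").addEventListener("change", async (e) => {
          const img = new Image();
          img.src = URL.createObjectURL(e.target.files[0]);
          await img.decode();
          const predictions = await engine.infer(workerId, new CVImage(img));
          console.log(predictions);
        });
      });
    </script>
  </body>
</html>
```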

Hey @Anthony

Could you share the project name and the version associated with the model? Some model architectures aren’t supported yet, and having that information will help us reproduce, and therefore identify, any issues that are happening.