Is it possible to locally deploy a semantic segmentation project?

I've created a semantic segmentation project on app.roboflow.com. I've labelled a dataset and trained a model using the "Generate" tab.

Now I’m trying to run the inference API locally on my machine following the docs, but I can’t figure out which endpoint to use to run the model.

Is it possible to run my semantic segmentation model locally? Or is it only supported via the hosted API?

When I run the curl command, I receive the response:

```
{"message":"'semantic-segmentation'"}
```

No segmentation mask is returned. At the same time, the server logs a 500 Internal Server Error: `KeyError: 'semantic-segmentation'`.

As noted here, when using the hosted API I had to switch the URL from detect.roboflow.com to segment.roboflow.com, as detect.roboflow.com was behaving in the same way (just returning the JSON message with no segmentation result). It seems there must be a corresponding change to get it running locally (a different Docker image for the server, perhaps?), but I can't find any documentation around this.
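For reference, the hosted-API fix described above amounts to keeping the same project path and query string but swapping the host. A minimal sketch of that URL rewrite (the project slug `my-project`, version `1`, and `KEY` below are placeholders, not values from this thread):

```python
from urllib.parse import urlparse, urlunparse

def to_segment_endpoint(url: str) -> str:
    """Rewrite a detect.roboflow.com inference URL to the
    segment.roboflow.com host used for segmentation models.
    Leaves the path (project/version) and query string untouched."""
    parts = urlparse(url)
    if parts.netloc == "detect.roboflow.com":
        parts = parts._replace(netloc="segment.roboflow.com")
    return urlunparse(parts)

# Same project/version path and API key, but on the segmentation host.
print(to_segment_endpoint("https://detect.roboflow.com/my-project/1?api_key=KEY"))
# https://segment.roboflow.com/my-project/1?api_key=KEY
```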

Hi @tim-fan

We recently open-sourced our inference server! Check out these resources for the most detailed and up-to-date information:

Hey @tim-fan! @leo provided some good docs links for the open-source package that powers Roboflow deployments. Currently, that package doesn't support semantic segmentation, but it's definitely on our roadmap! We're also always open to new contributors, so feel free to take a stab at adding it if you're interested!

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.