Multiple project hosting on an inference server

Hi

Is it possible to deploy multiple projects for inference on, e.g., a Jetson? I have tried one project and that works well, but when I invoke inference on a second project on the same host it reports “No trained model was found for this model version”. I have followed the same steps with both projects.

Thanks in advance for any tips.

Hi @michael_dibley - when you sent the request for the second model, did you update the model ID and dataset version to a dataset version trained with Roboflow Train?

Running “sudo docker run” once to launch the inference server should be all you need.

No need for two servers; just send the request for the second model to the same endpoint you used the first time, and it should load in the weights for the second model too (so long as that model was trained with Roboflow Train, and you also pass in the correct API key with your calls).
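
For reference, here’s a minimal sketch of what that looks like. The host address, model IDs, versions, image paths, and API key are all placeholders, and it assumes your local server follows the same URL pattern as the hosted Roboflow API (base64 image POSTed as the request body):

```python
import base64
import requests

# Hypothetical values -- substitute your own host, model IDs, and API key.
JETSON_HOST = "http://192.168.1.50:9001"  # local inference server
API_KEY = "YOUR_API_KEY"

def infer(model_id: str, version: int, image_path: str) -> dict:
    """Send one image to the local inference server for the given model."""
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("utf-8")
    resp = requests.post(
        f"{JETSON_HOST}/{model_id}/{version}",
        params={"api_key": API_KEY},
        data=img_b64,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    resp.raise_for_status()
    return resp.json()

# Same server, two different models -- only the model ID/version changes.
print(infer("first-project", 1, "test1.jpg"))
print(infer("second-project", 1, "test2.jpg"))
```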

Hi Mohamed

Thanks very much for the response

Yes, in the inference requests I have two strings, each with the appropriate project name, version, and API key.

The second one gives an error on the Jetson, but when I substitute the Roboflow server name for my local Jetson in the inference request string, that works OK - so the request configuration works against that host at least.
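
In case it helps anyone else debug the same thing, something like the sketch below (hypothetical host names and model ID) sends an identical request to both endpoints and prints the response body, which can surface the local server’s error detail:

```python
import base64
import requests

API_KEY = "YOUR_API_KEY"
MODEL = "second-project/1"  # placeholder model ID/version

with open("test.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

# Same request to the hosted API and the local Jetson; only the host
# differs, so a failure on one side isolates where the problem is.
for host in ("https://detect.roboflow.com", "http://jetson.local:9001"):
    resp = requests.post(
        f"{host}/{MODEL}",
        params={"api_key": API_KEY},
        data=img_b64,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    # Print the raw body too -- the error text may explain why the
    # model version failed to load on the local server.
    print(host, resp.status_code, resp.text[:200])
```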

The docker container on the Jetson starts on boot and I have restarted it several times; the first model still works, so the endpoint is active.

I set the first one up ~4 months ago and don’t remember doing anything different yesterday in the training preparation - I just followed the Roboflow process - and in step 5, ‘Generate’, there aren’t any model config options.

Any other ideas?

Thanks again