Raspberry Pi (Node.js code): 'No trained model was found for this model version'

Hello to all, I'm quite new to Roboflow and I need a little help from the community.
I trained a public model to detect my cats: gigio-and-astra Instance Segmentation Dataset and Pre-Trained Model by Home4Me.
I installed the Inference Server on my Raspberry Pi 4 and started the Docker container.
Everything works fine up to this point.
But when I try to run inference on an image using the URL http://pollicino:9001/gigio-and-astra/2, I receive an error telling me:

{
  error: {
    error: {
      message: 'No trained model was found for this model version.',
      type: 'GraphMethodException',
      hint: 'You must train a model on this version with Roboflow Train before you can use inference.',
      e: [Array]
    }
  }
}

Here is my (dummy) JS code:


require("dotenv").config();
const sharp = require("sharp");
const axios = require("axios");
const fs = require("fs");

async function test() {
  const resizedImageBuf = await sharp("./assets/IMG_2534.JPG")
    .rotate()
    .resize(500)
    .jpeg({ mozjpeg: true })
    .toBuffer();

  var image = `data:image/png;base64,${resizedImageBuf.toString("base64")}`;

  axios({
    method: "POST",
    url: `${process.env.HOSTED_API}`,
    params: {
      api_key: `${process.env.API_KEY}`,
    },
    data: image,
    headers: {
      "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
    },
  })
    .then(function (response) {
      console.log(response.data);
    })
    .catch(function (error) {
      console.log(error.message);
    });
}

test();
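
For completeness, here is the .env the script loads. The key is redacted; the URL is the same endpoint mentioned above:

# .env (key redacted)
HOSTED_API=http://pollicino:9001/gigio-and-astra/2
API_KEY=<my-roboflow-api-key>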

Thanks for any help!

I also tried the Python code:

from roboflow import Roboflow

rf = Roboflow(api_key="xx")
project = rf.workspace("home4me").project("gigio-and-astra")
# Route inference through the local server instead of the hosted API
model = project.version(2, local="http://localhost:9001").model
prediction = model.predict("IMG_2534.JPG", confidence=40)
print(prediction.json())

but in this case the Docker container raises an error:

TypeError: req.body.replace is not a function
    at transformImageBody (/inference-server/server/index.js:323:26)
    at Layer.handle [as handle_request] (/inference-server/server/node_modules/express/lib/router/layer.js:95:5)
    at next (/inference-server/server/node_modules/express/lib/router/route.js:144:13)
    at Route.dispatch (/inference-server/server/node_modules/express/lib/router/route.js:114:3)
    at Layer.handle [as handle_request] (/inference-server/server/node_modules/express/lib/router/layer.js:95:5)
    at /inference-server/server/node_modules/express/lib/router/index.js:284:15
    at param (/inference-server/server/node_modules/express/lib/router/index.js:365:14)
    at param (/inference-server/server/node_modules/express/lib/router/index.js:376:14)
    at param (/inference-server/server/node_modules/express/lib/router/index.js:376:14)
    at Function.process_params (/inference-server/server/node_modules/express/lib/router/index.js:421:3)

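From the stack trace, my guess is that transformImageBody assumes req.body is a base64 string, while the Python client sends the image in some other form (a Buffer or a parsed object) that has no .replace method. A rough sketch of what the server seems to be doing at index.js:323 (my reconstruction, not the actual Roboflow source):

// Hypothetical reconstruction of the server's transformImageBody middleware.
// It only works when req.body is a string, e.g. a base64 data URI.
function transformImageBody(req, res, next) {
  // Strip the optional "data:image/...;base64," prefix, keeping the raw base64.
  req.body = req.body.replace(/^data:image\/\w+;base64,/, "");
  next();
}
// If req.body arrives as a Buffer or parsed JSON instead of a string,
// .replace is undefined and Express surfaces exactly this TypeError.
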
Hi, did you resolve this issue? I'm getting the same error now on a model I trained in a Colab notebook and deployed back to Roboflow. The deployed model is visible on my page, but when I try to use it in my web app, I get this error:

{
    "error": {
        "message": "No trained model was found for this model version.",
        "type": "GraphMethodException",
        "hint": "You must train a model on this version with Roboflow Train before you can use inference.",
        "e": [
            "Could not parse size from model.json"
        ]
    }
}

Hi @MainBearing, are you also using the Inference Server Docker container?

Hi @stellashphere, no, I'm just pulling it directly into a web page on my local test rig for now. I retrained the model as a new version and the issue is now resolved. No idea what went wrong with the previous version.
