Can't get inference.js to work

I am trying to get the most basic web-based version using inference.js to work and struggling.

I want to avoid using node.js, and the web browser page assumes node (as far as I can tell):

The hand detection web app they provide doesn’t work either, and if you look at its source code, it bears no relation to what they suggest:
https://demo.roboflow.com/egohands-public/

I tried ChatGPT and it couldn’t solve it.

Has anyone got a vanilla js demo of sending an image to inference.js and getting a response?

Many thanks

I have also tried the Replit version, and it only seems to work with their COCO model. If I swap in my model, it throws errors.

Can someone from the Roboflow team confirm inferencejs works? All the online Roboflow examples only use roboflow.js. Should I be using roboflow.js?

Many thanks

The Replit example works if I use the provided models. If I swap in my model, this is the error I get:

{
    "error": {
        "message": "No trained model was found for this model version.",
        "type": "GraphMethodException",
        "hint": "You must train a model on this version with Roboflow Train before you can use inference.",
        "e": [
            {}
        ]
    }
}

I trained my model using the YOLOv11 Colab provided by Roboflow:

I’m slightly confused about why no one from Roboflow has responded to this thread.

@joaomarcoscrs-roboflow
@peter_roboflow
@leandro_roboflow

Maybe you can help or direct another roboflow employee to this query?

@DHDPIC thanks for the tag – can you try with a YOLOv8 model?

@DHDPIC I can confirm inferencejs works and has been tested on YOLOv8 and YOLOv11 models. The error you are referencing seems to suggest that the model weights were not uploaded to Roboflow correctly, or that there was an error converting the weights on our end. Are you able to test that model using the hosted API?

Thank you @peter_roboflow @Maxwell for the reply, I really appreciate it.

I was able to use the model in a python script with the inference library. Please see the visual output below.

The model works fine on its page:

I originally trained the model using this Colab:

And I followed this Roboflow resource:

Now, I just went back to the Colab and all my training data and outputs have disappeared from it. I guess it clears after disconnecting.

Do I need to retrain the model to get the weights? I thought the weights were automatically uploaded to Roboflow after training by this line in the Colab:

project.version(dataset.version).deploy(model_type="yolov11", model_path=f"{HOME}/runs/detect/train/")

Or do I need to manually upload the weights myself?

I could try the hosted API, but I am a novice with Node and Python, so I will need to find a good resource/script to help me through that.

I really appreciate answers to any questions and pointers to resources so I can get this model deployed on the web!

Hello, I tried inference on the hosted API and I think it works fine using Python. I can print the prediction results, which look accurate to me:

{'inference_id': '10092595-61ea-4c13-b65f-fd1f92cc485f',
 'time': 0.08718009199947119,
 'image': {'width': 1300, 'height': 956},
 'predictions': [{'x': 1082.0, 'y': 663.5, 'width': 150.0, 'height': 201.0,
                  'confidence': 0.8892155885696411, 'class': 'cottontail-rabbit',
                  'class_id': 1, 'detection_id': 'c76a86cd-9461-4654-975f-de33b4701aad'},
                 {'x': 408.5, 'y': 331.0, 'width': 711.0, 'height': 660.0,
                  'confidence': 0.8846428990364075, 'class': 'mule-deer',
                  'class_id': 3, 'detection_id': '50ee5475-5455-48fd-b015-c01cff842811'}]}

Here is the script (with private details removed):

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="my_api_key_is_put_here"
)

result = CLIENT.infer("images/cottontail1.jpg", model_id="cowildlife/3")

print(result)

I can’t get an image rendered the same way as the Inference SDK; I don’t think the hosted API returns an image in the results. My Python skills aren’t up to doing that in PIL or OpenCV.
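For reference, each prediction in that response gives a centre point (x, y) plus a width and height, so the first step for drawing boxes in any library is converting to corner coordinates. A minimal pure-Python sketch, assuming the response shape shown above (`to_corners` is just an illustrative helper name):

```python
def to_corners(pred):
    """Convert one Roboflow prediction (centre x/y + width/height)
    to (x1, y1, x2, y2) corner coordinates."""
    x1 = pred["x"] - pred["width"] / 2
    y1 = pred["y"] - pred["height"] / 2
    return (x1, y1, x1 + pred["width"], y1 + pred["height"])

# The first prediction from the response above:
pred = {"x": 1082.0, "y": 663.5, "width": 150.0, "height": 201.0}
print(to_corners(pred))  # (1007.0, 563.0, 1157.0, 764.0)
```

Those corners can then be passed straight to something like PIL's `ImageDraw.rectangle`.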

Hopefully that helps to understand the state of the model I trained.

Thanks, David

Would using the frames from Python be good enough? You can visualize the results with supervision:

import supervision as sv
import cv2

# `result` is the response from the CLIENT.infer() call above
image = cv2.imread("images/cottontail1.jpg")
detections = sv.Detections.from_inference(result)

box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = box_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)

# save the annotated frame so it can be viewed
cv2.imwrite("annotated.jpg", annotated_image)

Otherwise, you’ll probably need to retrain. It looks like there’s a bug with YOLOv11 model upload that I’m looking into now – you can either retrain with YOLOv8, or wait for me to fix YOLOv11 and retrain :confused:

The bug in YOLOv11 model upload has been fixed. You can reupload if you still have access to your PyTorch weights. Otherwise, you can retrain if that’s easy, or, if that’s not possible, we may be able to get it working on a new version for you if you add Roboflow support to the account.


Thanks @peter_roboflow that is awesome. I appreciate the support!

I will try to retrain tomorrow and report back to this thread.

OK, a bit disappointing: I retrained the model in the Colab, and when I get to the deploy command:

project.version(dataset.version).deploy(model_type="yolov11", model_path=f"{HOME}/runs/detect/train/")

I get this error:
An error occured when getting the model upload URL: This version already has a trained model. Please generate and train a new version in order to upload model to Roboflow.

So now I have to redo the whole process with a new version, even though the dataset is exactly the same? Why can’t I just replace the model on an existing version? Perhaps there is a good reason.

Ah, sorry to hear that. We only support one model per version right now, and we don’t want users to accidentally overwrite their work. The model upload does need to go to a new version that currently has no model on it.

You shouldn’t need to do the whole process again if you still have access to the model weights. Just the deploy step.
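Concretely, the deploy-only path looks something like this. A sketch using the roboflow Python package: the API key, workspace/project names, and version number are placeholders, and it assumes you still have the runs/detect/train/ folder from training (it needs valid credentials and a network connection to actually run):

```python
from roboflow import Roboflow

# Placeholders: substitute your own API key, workspace, project,
# and the number of a freshly generated version with no model on it.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")

# Deploy the weights you already trained to the new, empty version.
# No retraining needed, just the deploy step.
project.version(4).deploy(model_type="yolov11",
                          model_path="runs/detect/train/")
```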


No problem. I retrained again; I’m only doing 10 epochs at the moment. But next time I’ll know to make a new dataset version for a longer run with more epochs.

Good news: it is working now on the web!

I have a quick test page up here:
https://datawalking.com/wild/robo/

I had to run your demo script through ChatGPT to get it working with vanilla JS.

Thanks for your help @peter_roboflow I really appreciate it.

Inference time is very variable, anything from 1.5 seconds to 16 seconds. What causes that?

The app demo with the hand detector is now working with my model and runs well on a live video. Superb!


Awesome! Thanks for reporting this!


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.