The validate step works, but I think I have the URL syntax wrong. I'm fairly sure the access token is correct, but I'm not sure whether the endpoint is just the model ID.
So my URL would be in the format http://localhost:9001/mymodel-abcdef?access_token=ab_cd7aBCXYZAB6xyzABxyz12abcde12
I get an error message: "no model here - the model endpoint must be passed as in the hosted API (see docs.roboflow.com)".
I'm trying this on the public plan; I'm assuming that isn't the issue.
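For reference, here's roughly what I'm sending, written as a Python sketch. The model name, version number, and token are placeholders, and I'm only guessing that the local server wants the hosted-API shape of /<model>/<version> in the path:

```python
import base64
import requests

# Placeholders only - not my real model ID, version, or token.
# Guessing the local server expects the hosted-API path shape: /<model>/<version>
url = "http://localhost:9001/mymodel-abcdef/1"

# Send the image as a base64-encoded body, as in the hosted API examples
with open("test.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    url,
    params={"access_token": "MY_ACCESS_TOKEN"},
    data=img_b64,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(resp.status_code)
print(resp.text)
```

If the version number doesn't belong in the path, that's probably where I'm going wrong.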
Mine did the same thing: I got the 500 response, whose message is just "Error!"
It worked well with the COCO example and returned a 200. With my model, the error pointed to some coding mistake in the Python, and since I didn't write that code, I couldn't check the lines it referenced.
@andrewh @Russ76 We are working on cleaning up our OAK deployment into a pip package. The Docker deployment strategy has been notoriously complicated, and some nuance there is likely causing the issue here. The new idea is that as long as you can install and run depthai on your host's Python, the new deployment will work.
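For anyone who wants to check ahead of time, a quick sanity check like this (just a sketch, assuming `pip install depthai` has already been run) should confirm that depthai can see an OAK device from the host's Python:

```python
import depthai as dai

# List every OAK / DepthAI device visible from this host
devices = dai.Device.getAllAvailableDevices()

if not devices:
    print("No OAK devices found - check the USB connection and permissions/udev rules.")
else:
    for dev in devices:
        print("Found OAK device with MxId:", dev.getMxId())
```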
Great! The DepthAI tools work well with the camera, and the Roboflow web pages work well for creating an AI network. I just couldn't get a model trained on my custom data to work on the camera without the web API. I plan to deploy this on a yard robot, so it must be automatic and fast, with no manual handling of images or web pages.
Also, I couldn't see how to get one of your public datasets into my workspace so that I could merge it with my own dataset and create a better network!
We are eager for the pip package for the Raspberry Pi. That sounds great! I will record a video of my robot working for you to use once it is deployed and perfected. Meanwhile, the dandelions grow…
Docker installs on the Raspberry Pi 4 with 4 GB of memory, but it still won't work with your Docker programs. Below is the output from Raspbian Buster and Ubuntu Jenny.