Too Many Requests error - docker inference server on Raspberry Pi

I am trying to run object detection via an inference server in a Docker container on a Raspberry Pi. The container runs and accepts requests from a test Python script, but I am getting a 429 Client Error. I have tried several different versions of the Docker container and got the same result. I have tried Python and curl and neither works. If I send the request to the Roboflow hosted API instead of the local server, it works fine. My code and the details of the error message are below. I have replaced my API key with # characters.

from roboflow import Roboflow
rf = Roboflow(api_key="###########")
project = rf.workspace().project("squirrel-spotter")
# model = project.version(1).model  # hosted API version (this works)
model = project.version(1, local="http://127.0.0.1:9001/").model
prediction = model.predict("s49b.jpg", confidence=40, overlap=30)

The error was: 429 Client Error: Too Many Requests

The 429 Client Error: Too Many Requests error typically indicates that you have exceeded the rate limit for requests to the server. However, since you’re running the server locally, this shouldn’t be an issue unless you have set up rate limiting on your local server.

From the information provided in the extracts, it’s not clear why you’re encountering this issue. The code you’ve shared seems to be correct and should work as expected if the local server is running and configured correctly.

One thing to note is that the first inference call to the model will take a few seconds, because it downloads and initializes your model weights. Once an initial inference call succeeds, subsequent predictions are processed faster.

Also, ensure that the Docker container is running before executing the Python script. If it is not running, start it with:

sudo docker run --net=host roboflow/inference-server:cpu
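Before debugging further, it can help to confirm that anything at all is listening on the port your script targets. Below is a minimal sketch of such a check using only the standard library; the URL and port are assumptions based on the script above, and the helper name is hypothetical, not part of the Roboflow SDK:

```python
import urllib.request
import urllib.error

def check_server(url="http://127.0.0.1:9001/", timeout=5):
    """Return True if something answers HTTP on the given URL.

    An HTTP error status (including 429) still means the server is up;
    only a connection failure counts as unreachable.
    """
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, just with an error status code.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused / timed out: nothing is listening.
        return False

if __name__ == "__main__":
    if check_server():
        print("Inference server is reachable on port 9001")
    else:
        print("Nothing is listening on port 9001 - is the container running?")
```

If this reports the server as unreachable, the problem is the container or port mapping rather than rate limiting.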


If you’re still encountering issues, it might be helpful to check the logs of the Docker container to see if there are any error messages that could help diagnose the problem. If the issue persists, I would recommend reaching out to Roboflow’s support for further assistance.

Hi! I’m Roboflow’s AI powered bot. This is an automated response powered using AI with knowledge from our docs, blog, knowledge base, and previous forum responses. If you find this to be a helpful answer, please mark it as the solution using the checkbox icon below. If this doesn’t solve your issue, please follow up with what you’ve tried/why, and the community will continue to chime in as usual.

I think I resolved this. Instead of the Docker container that was referenced in the article, I used this one…

roboflow/roboflow-inference-server-arm-cpu - Docker Image | Docker Hub

Hey @Warren_Wiens, sorry for the late reply. Glad this was resolved. Can you share the article you are referencing so we can correct it?

This is the article I started with.

Raspberry Pi - Roboflow Docs

This is the reference to the Docker image…

sudo docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu:0.4.4

Of course, this might just have been my mistake somewhere, but I have been able to successfully install and run the other image several times now.

Thank you! I’ve updated the docs.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.