When trying to run API inference with a YOLOv8 model deployed on Roboflow, I get the following error (see screenshot): 413: Payload too large for URL
How can I solve this?
Project type: Object Detection
Operating system & browser: Google Colab
Hi, thank you for your response! The resize preprocessing step is set to 640 x 640. Do I need to resize the image before sending it to the API? I originally thought the resize preprocessing was applied automatically. Thank you in advance!
When did you upload the YOLOv8 model weights that you’re using for inference? Was it within the last day or two, or right after the Upload Weights feature was released?
I just tried it with both version 3 (YOLOv8n) and version 4 (YOLOv8s) of your project (inference credits were added back to cover the tests I ran).
It seems the request may have failed because the image itself was too large: it was 4K resolution, though the file size was only about 800KB when I downloaded it.
Higher image resolution is great for capturing images and labeling data, but as you’ve already seen, “smaller” image sizes speed up model training and inference.
The best course of action is to resize images prior to inference. Additionally, I’m including the sample script I used for inference, in case you’d also like to try it on Colab again, or on your host machine.
```python
from roboflow import Roboflow

rf = Roboflow(api_key="INSERT_PRIVATE_API_KEY")

# Load your workspace
workspace = rf.workspace("mohamed-traore-2ekkp")

# Load a specific project
project = workspace.project("face-detection-mik1i")

# Load a specific version of the project and retrieve its model
version = project.version(18)
model = version.model

# Predict on a local image
img_file = "park_faces-1080.jpg"
prediction = model.predict(img_file, confidence=40, overlap=30)

# Save the prediction as an image
prediction.save(output_path='predictions.jpg')

# Convert the predictions to JSON and print the result
print(prediction.json())

# Plot the prediction
prediction.plot()
```
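Note that the snippet above sends the image file at its original resolution. If you want to keep using the SDK, one way to stay under the payload limit is to downscale the image yourself before calling `model.predict`. Here’s a minimal sketch assuming Pillow is installed; the 640-pixel target matches the preprocessing size mentioned earlier, and the file names plus the synthetic demo image are placeholders:

```python
from PIL import Image

def resize_for_inference(src_path, dst_path, max_side=640):
    """Downscale an image so its longest side is max_side pixels,
    preserving the aspect ratio, and save it as a JPEG."""
    img = Image.open(src_path)
    scale = max_side / max(img.size)
    if scale < 1:  # only shrink, never upscale
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.LANCZOS)
    img.convert("RGB").save(dst_path, "JPEG", quality=90)
    return dst_path

# Demo with a synthetic 4000x3000 image standing in for a 4K photo
Image.new("RGB", (4000, 3000), "gray").save("park_faces-4k.jpg")
small = resize_for_inference("park_faces-4k.jpg", "park_faces-640.jpg")
print(Image.open(small).size)  # -> (640, 480)
```

You would then pass the smaller file to `model.predict` instead of the original.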
Here’s another example that uses the Hosted API directly:
```python
import base64

import cv2
import numpy as np
import requests

# Construct the Roboflow Infer URL
# (if running locally, replace https://detect.roboflow.com/ with e.g. http://127.0.0.1:9001/)
ROBOFLOW_API_KEY = "INSERT_PRIVATE_API_KEY"

# For my face detection project, the model id is "face-detection-mik1i"
ROBOFLOW_MODEL_ID = "INSERT_MODEL_ID"

# For my face detection project, the version number used in this example is "18"
MODEL_VERSION = "INSERT_VERSION#"

# Resize value - for my face detection project, v18, it is 640
ROBOFLOW_SIZE = 640

# Path to the image for inference; insert the path between the empty quotes
img_path = ""

upload_url = "".join([
    "https://detect.roboflow.com/",
    ROBOFLOW_MODEL_ID, "/",
    MODEL_VERSION,
    "?api_key=",
    ROBOFLOW_API_KEY,
    "&format=image",
    "&stroke=2"
])

# Infer via the Roboflow Hosted Inference API and return the result
def infer(img, ROBOFLOW_SIZE, upload_url):
    # Resize (while maintaining the aspect ratio) to improve speed and save bandwidth
    height, width, channels = img.shape
    scale = ROBOFLOW_SIZE / max(height, width)
    img = cv2.resize(img, (round(scale * width), round(scale * height)))

    # Encode the image to a base64 string
    retval, buffer = cv2.imencode('.jpg', img)
    img_str = base64.b64encode(buffer)

    # Get the prediction from the Roboflow Infer API
    resp = requests.post(upload_url, data=img_str, headers={
        "Content-Type": "application/x-www-form-urlencoded"
    }, stream=True).raw

    # Parse the result image
    image = np.asarray(bytearray(resp.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    return image

img_file = cv2.imread(img_path)
result = infer(img_file, ROBOFLOW_SIZE, upload_url)
cv2.imwrite("prediction.jpg", result)
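The script above asks the API to return an annotated image (`format=image`). If you’d rather get the raw detections back, the same endpoint also returns JSON when you omit that parameter. Here’s a small stdlib-only sketch of building the inference URL; the `confidence` and `overlap` query parameters mirror the values used in the SDK example above, and the sample model id and version are the ones from my face detection project:

```python
from urllib.parse import urlencode

def build_infer_url(model_id, version, api_key, **params):
    """Construct a Roboflow Hosted API inference URL. Extra keyword
    arguments (e.g. confidence, overlap, format) become query params."""
    query = urlencode({"api_key": api_key, **params})
    return f"https://detect.roboflow.com/{model_id}/{version}?{query}"

# Same endpoint as above, but requesting JSON detections instead of
# an annotated image (no format=image):
url = build_infer_url("face-detection-mik1i", 18, "INSERT_PRIVATE_API_KEY",
                      confidence=40, overlap=30)
print(url)
```

You would then POST the base64-encoded image to this URL exactly as in the `infer` function above and read the predictions from `resp.json()` instead of decoding an image.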