Roboflow API Inference Error

Please share the following so we may better assist you:
When trying to run API inference with a YOLOv8 model deployed on Roboflow, I get the following error (see screenshot): 413: Payload too large for URL

I was wondering how I could solve this?

  1. Project type: Object Detection
  2. The operating system & browser you are using and their versions: Google Colab
  3. The screenshot of the error being triggered in your browser’s developer tools/console. Please copy/paste the url below to watch how to pull up your devtools

Hi @Alper - what is the pixel size, both width and height, on the image(s) you’ve tried with?

Additionally, what Resize parameter did you resize the images to when you generated the version used for training your model?

Here’s an example of the Resize parameter on one of my datasets:

Hi, thank you for your response! The resize size should be 640 x 640 — do I need to resize the image before sending it to the API? I originally thought the resize preprocessing was applied automatically. Thank you in advance!

I believe the image is 1080p

I just tried inference with a dataset version of mine that was resized to 640x640 prior to training.

The image I ran inference with was 1080 x 820 pixels - it did give me a result without error:

When did you upload the YOLOv8 model weights that you’re using for inference? Was it sometime in the last day or two, or right after the Upload Weights feature was released?

Can you email me the image to test? I’ll send you an email

My test was successful.

I just tried it with both version 3 (YOLOv8n) and 4 (YOLOv8s) of your project (infer credits were added back to make up for the tests I ran).

It seems that Colab may have automatically timed out the request. I noticed that the image was 4K resolution, but the file size was only about 800KB when I downloaded it.

Higher image resolution is great for capturing images and labeling data, but as you’ve already seen, “smaller” image sizes speed up model training and inference.
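For reference, the aspect-ratio-preserving scaling used in the hosted API script further down comes out to simple arithmetic. Here’s a minimal, dependency-free sketch of that math (the function name and the 640 default are illustrative, matching the 640 x 640 resize used for training):

```python
def target_size(width, height, max_dim=640):
    # Scale the longer side down to max_dim, keeping the aspect ratio.
    scale = max_dim / max(width, height)
    return round(width * scale), round(height * scale)

# A 3840 x 2160 (4K) frame scales to 640 x 360 before upload.
print(target_size(3840, 2160))  # (640, 360)
```

Resizing this way before inference keeps the request small without distorting the image.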

The best course of action will be to resize images prior to inference. Additionally, I’m including the sample script I used for inference, in case you’d also like to try it on Colab again, or on your host machine.

from roboflow import Roboflow

rf = Roboflow(api_key="INSERT_PRIVATE_API_KEY")

# List all projects for your workspace
workspace = rf.workspace("mohamed-traore-2ekkp")

# Load a specific project
project = workspace.project("face-detection-mik1i")

# Load a specific version of a project
version = project.version(18)

# Retrieve the model of a specific project
model = version.model

# predict on a local image
img_file = "park_faces-1080.jpg"

prediction = model.predict(img_file, confidence=40, overlap=30)

# Save the prediction as an image"predictions.jpg")

# Convert predictions to JSON and print the result
print(prediction.json())

# Plot the prediction
prediction.plot()

Here’s another example that utilizes the Hosted API, directly:

import cv2
import base64
import numpy as np
import requests

# Construct the Roboflow Infer URL
# (if running inference locally, replace the hosted URL below with your local server's URL)
# for my face detection project, the model id is: "face-detection-mik1i"
# for my face detection project, the version number is: "18" for this inference example
ROBOFLOW_MODEL = "face-detection-mik1i/18"
ROBOFLOW_API_KEY = "INSERT_PRIVATE_API_KEY"

# resize value - for my face detection project, v18, it is 640
ROBOFLOW_SIZE = 640

# path to image for inference, insert path between the empty quotes
img_path = ""

upload_url = "".join([
    "https://detect.roboflow.com/",
    ROBOFLOW_MODEL,
    "?api_key=",
    ROBOFLOW_API_KEY,
    "&format=image",
    "&stroke=5"
])

# Infer via the Roboflow Hosted Inference API and return the result
def infer(img, ROBOFLOW_SIZE, upload_url):
    # Resize (while maintaining the aspect ratio) to improve speed and save bandwidth
    height, width, channels = img.shape
    scale = ROBOFLOW_SIZE / max(height, width)
    img = cv2.resize(img, (round(scale * width), round(scale * height)))

    # Encode image to base64 string
    retval, buffer = cv2.imencode('.jpg', img)
    img_str = base64.b64encode(buffer)

    # Get prediction from Roboflow Infer API
    resp = requests.post(upload_url, data=img_str, headers={
        "Content-Type": "application/x-www-form-urlencoded"
    }, stream=True).raw

    # Parse result image
    image = np.asarray(bytearray(resp.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)

    return image

img_file = cv2.imread(img_path)
result = infer(img_file, ROBOFLOW_SIZE, upload_url)

cv2.imwrite("prediction.jpg", result)
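One more note on the 413 itself: base64 encoding inflates the payload by about a third, which is likely what pushed the full-resolution 4K image over the request size limit. A quick stdlib sanity check (illustrative only, not Roboflow API code — the 800KB figure is the downloaded file size mentioned above):

```python
import base64

data = b"\x00" * 800_000           # stand-in for an ~800 KB JPEG
encoded = base64.b64encode(data)
print(len(encoded))                # 1066668 bytes, ~1.07 MB after inflation
```

Resizing before encoding, as the infer() helper above does, shrinks the payload well below any such limit.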