{ "error": [ "Error opening image. Check that you are sending a properly formatted image." ] }

My initial hunch is that the issue is the image size you're sending.

Can you try resizing the image in your inference code (before sending it to the API) to the size it was preprocessed to during version generation?

For example, if you resized to 416x416 during preprocessing, resize the image to 416x416 pixels before sending it. If you selected 640x640 (or if you didn't apply a resize at all), set the image to 640x640. A minimal sketch of that check is shown below.
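As a quick standalone check (assuming your version was preprocessed to 416x416; the file path is a placeholder), that resize step with OpenCV looks like this:

import cv2

# Hypothetical example: match the resize used during dataset version generation (assumed 416x416 here)
img = cv2.imread("INSERT_FILE_PATH")
img = cv2.resize(img, (416, 416))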

Here’s an example in Python for sending a local image file (resized) to the API for inference:

Note on the config file: you can omit "EMAIL", "__comment2", "FRAMERATE", and "BUFFER" from the file, since they're used only if you're sending an email with the inference results or running this with a webcam.
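For reference, a minimal roboflow_config.json containing just the keys the script below reads might look like this (the values here are placeholders, not real credentials):

{
  "ROBOFLOW_API_KEY": "YOUR_API_KEY",
  "ROBOFLOW_MODEL": "your-model-name/your-version-number",
  "ROBOFLOW_SIZE": 416
}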

import json
import cv2
import base64
import numpy as np
import requests
import time

# load config
with open('roboflow_config.json') as f:
    config = json.load(f)

    ROBOFLOW_API_KEY = config["ROBOFLOW_API_KEY"]
    ROBOFLOW_MODEL = config["ROBOFLOW_MODEL"]
    ROBOFLOW_SIZE = config["ROBOFLOW_SIZE"]

# Construct the Roboflow Infer URL
# (if running locally, replace https://detect.roboflow.com/ with e.g. http://127.0.0.1:9001/)
upload_url = "".join([
    "https://detect.roboflow.com/",
    ROBOFLOW_MODEL,
    "?api_key=",
    ROBOFLOW_API_KEY,
    "&format=image",
    "&stroke=2"
])

# Infer via the Roboflow Hosted Inference API and return the result
def infer(img):
    # Resize (while maintaining the aspect ratio) to improve speed and save bandwidth
    height, width, channels = img.shape
    scale = ROBOFLOW_SIZE / max(height, width)
    img = cv2.resize(img, (round(scale * width), round(scale * height)))

    # Encode image to base64 string
    retval, buffer = cv2.imencode('.jpg', img)
    img_str = base64.b64encode(buffer)

    # Get prediction from Roboflow Infer API
    resp = requests.post(upload_url, data=img_str, headers={
        "Content-Type": "application/x-www-form-urlencoded"
    }, stream=True).raw

    # Parse result image
    image = np.asarray(bytearray(resp.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)

    return image


img_file = cv2.imread("INSERT_FILE_PATH")  # replace with the path to your local image
result = infer(img_file)

cv2.imshow('predictions', result)
cv2.waitKey(0)  # keep the window open until a key is pressed
cv2.destroyAllWindows()
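If you'd rather save the annotated prediction to disk instead of (or in addition to) displaying it, a minimal follow-up sketch is below (the output filename is just a placeholder):

# Write the annotated prediction image returned by the API to disk
cv2.imwrite("predictions.jpg", result)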