Issue with running inference in Python Script using Google Cloud Functions API

Hi all,

I’m new to Roboflow and on my first test deployment.
I intend to capture an image via mobile and send the image to a python script running as a Google Cloud Function which calls my trained Roboflow object detection model.

Here is the Python script, running as a Google Cloud Function, that performs inference on the received image.

```python
from flask import Flask, request
from roboflow import Roboflow

app = Flask(__name__)

# Set up your Roboflow API key, model endpoint, and version

# Initialize the Roboflow client
rf = Roboflow(api_key=ROBOFLOW_API_KEY)
project = rf.workspace().project(ROBOFLOW_MODEL_ENDPOINT)
model = project.version(ROBOFLOW_VERSION).model

@app.route("/upload", methods=["POST"])
def upload(request):
    # Receive the incoming image file from the Android app
    image_file = request.files["image"]
    image_data = image_file.read()
    image_filename = image_file.filename

    # Run inference on the image using the Roboflow API
    prediction = model.predict(image_data, confidence=40, overlap=30)
    results = prediction.json()

    # Return the results of the inference
    return results

if __name__ == "__main__":
    app.run()
```
Now, when I try to test this script by sending a local image via the request below, it seems the Roboflow SDK doesn’t like the data it receives and throws errors.

```python
# Send the image data to the Cloud Function using a POST request with multipart/form-data
# (CLOUD_FUNCTION_URL is a placeholder for the function's trigger URL)
response = requests.post(CLOUD_FUNCTION_URL, files={"image": ("image.jpg", image_data, "image/jpeg")})
```
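In case it helps, this is roughly how I build the multipart payload on the client side (the helper name is just for this post, and the Cloud Function URL is a placeholder):

```python
def build_image_payload(path, field_name="image", content_type="image/jpeg"):
    """Read a local image file and build the multipart 'files' dict for requests.post."""
    with open(path, "rb") as f:
        image_data = f.read()
    # requests expects {field_name: (filename, bytes, content_type)}
    return {field_name: (path, image_data, content_type)}

# Usage (CLOUD_FUNCTION_URL is a placeholder for the function's trigger URL):
#     import requests
#     response = requests.post(CLOUD_FUNCTION_URL, files=build_image_payload("image.jpg"))
#     print(response.status_code, response.text)
```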

What am I missing or doing incorrectly?


I modified the Python script to the following, with no success :roll_eyes:

```python
import base64

@app.route("/upload", methods=["POST"])
def upload(request):
    # Receive the incoming image file
    image_file = request.files["image"]
    image_filename = image_file.filename

    # Convert the image data to a base64 encoded string
    with open(image_filename, "rb") as f:
        img_data = f.read()
        base64_img_data = base64.b64encode(img_data)

    # Run inference on the image using the Roboflow API
    prediction = model.predict(base64_img_data, confidence=40, overlap=30)
    results = prediction.json()
```

Additionally, this is the Python traceback:

```
Traceback (most recent call last):
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/", line 99, in view_func
    return function(request._get_current_object())
  File "/workspace/", line 38, in upload
    prediction = model.predict(base64_img_data)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/roboflow/models/", line 54, in predict
    self.__exception_check(image_path_check=image_path)
  File "/layers/google.python.pip/pip/lib/python3.9/site-packages/roboflow/models/", line 126, in __exception_check
    raise Exception("Image does not exist at " + image_path_check + "!")
TypeError: can only concatenate str (not "bytes") to str
```

Hi @kedar

It looks like there’s an issue with loading your image when predicting. Are you working with local files or image URLs?
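
Judging from the traceback, `model.predict` here is treating whatever you pass in as a local file path (it runs an `os.path.exists`-style check and tries to build an "Image does not exist at …" message, which then fails on your `bytes` input). If that's the case, one workaround is to write the uploaded bytes to a temporary file and pass that path to `predict`. A sketch, with an illustrative helper name of my own:

```python
import os
import tempfile

def save_to_temp_file(image_bytes, suffix=".jpg"):
    """Write raw image bytes to a temporary file and return its path (a str)."""
    fd, tmp_path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, "wb") as f:
        f.write(image_bytes)
    return tmp_path

# In the Cloud Function handler, something along these lines:
#     tmp_path = save_to_temp_file(request.files["image"].read())
#     prediction = model.predict(tmp_path, confidence=40, overlap=30)
#     results = prediction.json()
#     os.remove(tmp_path)
```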