Uploading results from roboflow/auto-annotate treats polygons as bounding boxes

I have an instance segmentation model deployed on Roboflow.
I run the roboflow/auto-annotate tool (GitHub - roboflow/auto-annotate: A simple tool for automatic image annotation using Roboflow API) and it successfully generates prediction JSON files.
When I upload the images and the generated JSON files, Roboflow treats the predictions.points as bounding boxes and does not generate polygon annotations.

How can I upload the instance segmentation results and have Roboflow treat them correctly?

The repo was made with bounding box labels in mind.

For direct assistance with the auto-annotate repository, please post an Issue in the repository: Issues · roboflow/auto-annotate · GitHub

@Mohamed thanks for the fast reply. Is there documentation on what the upload format is expected to be? I can reformat the data myself if I have to.

@saschwarz Do you have a saved example of one of the JSON label files that were being uploaded with the method you used?

And have you checked that you’re uploading to the correct project type?

  • e.g., bounding box object detection predictions to an object detection project, or instance segmentation predictions to an instance segmentation project (a quick programmatic check is sketched below)

This will help us know what may have caused the issue.
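
If it helps, the project type can also be checked programmatically with the Roboflow Python package. A minimal sketch; the API key and project ID are placeholders:

from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")                 # placeholder API key
project = rf.workspace().project("your-project-id")   # placeholder project ID

# Project.type holds the project type string that the upload code checks against,
# e.g. "object-detection" vs "instance-segmentation".
print(project.type)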

And for the format in the main repo (for object detection):

The annotations are saved in the Roboflow JSON Response Object format, as all Roboflow model predictions have a JSON Response Object you can access:

[image: example JSON Response Object]
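
A minimal sketch of how that response object is produced and saved, mirroring the calls used later in this thread (the project ID, version, and file names are placeholders):

import json
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")                                   # placeholder API key
model = rf.workspace().project("your-project-id").version(1).model     # placeholder project/version

# .json() returns the Roboflow JSON Response Object for the prediction
response = model.predict(image_path="example.jpg", confidence=40).json()

with open("example.json", "w") as f:
    json.dump(response, f, indent=4)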


Upload Functionality for Labels and Images:

def upload(
    self,
    image_path: str = None,
    annotation_path: str = None,
    hosted_image: bool = False,
    image_id: int = None,
    split: str = "train",
    num_retry_uploads: int = 0,
    batch_name: str = DEFAULT_BATCH_NAME,
    tag_names: list = [],
    is_prediction: bool = False,
    **kwargs,
):
    """Upload image function based on the RESTful API

    Args:
        image_path (str) - path to the image you'd like to upload
        annotation_path (str) - if you're uploading an annotation, path to it
        hosted_image (bool) - whether the image is hosted
        image_id (int) - id of the image
        split (str) - the dataset split to upload the image to
        num_retry_uploads (int) - how many times to retry upload on failure
        batch_name (str) - name of batch to upload to within project
        tag_names (list[str]) - tags to be applied to an image
        is_prediction (bool) - whether the annotation data is a prediction rather than ground truth

    Returns:
        None - returns nothing
    """
def __annotation_upload(
    self, annotation_path: str, image_id: str, is_prediction: bool = False
):
    """function to upload annotation to the specific project
    :param annotation_path: path to annotation you'd like to upload
    :param image_id: image id you'd like to upload that has annotations for it.
    """

    # stop on empty string
    if len(annotation_path) == 0:
        print("Please provide a non-empty string for annotation_path.")
        return {"result": "Please provide a non-empty string for annotation_path."}

    # check if annotation file exists
    elif os.path.exists(annotation_path):
        print("-> found given annotation file")
        annotation_string = open(annotation_path, "r").read()

    # if not annotation file, check if user wants to upload regular as classification annotation
    elif self.type == "classification":
        print(f"-> using {annotation_path} as classname for classification project")
        annotation_string = annotation_path

    # don't attempt upload otherwise
    else:
        print(
            "File not found or uploading to non-classification type project with invalid string"
        )
        return {
            "result": "File not found or uploading to non-classification type project with invalid string"
        }

    # Set annotation upload url

    project_name = self.id.rsplit("/")[1]

    self.annotation_upload_url = "".join(
        [
            API_URL + "/dataset/",
            project_name,
            "/annotate/",
            image_id,
            "?api_key=",
            self.__api_key,
            "&name=" + os.path.basename(annotation_path),
            "&is_prediction=true" if is_prediction else "",
        ]
    )

    # Get annotation response
    annotation_response = requests.post(
        self.annotation_upload_url,
        data=annotation_string,
        headers={"Content-Type": "text/plain"},
    )

    # Return annotation response
    return annotation_response
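
A minimal usage sketch of that upload path, matching the signature above (paths are placeholders):

# Upload an image together with its saved prediction JSON as the annotation.
# is_prediction=True marks the labels as model predictions rather than
# ground-truth annotations (see the signature above).
project.upload(
    image_path="input/example.jpg",          # placeholder image path
    annotation_path="output/example.json",   # placeholder annotation path
    split="train",
    is_prediction=True,
)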

@Mohamed Thanks for the detailed info.

  • My project is definitely an instance segmentation project.
  • The JSON file is a response object from that project and contains predictions.points with values.

I used the UI to upload the image and JSON files; I assumed it would “just work”.

I will try using the API to do the upload and check the results.


@Mohamed unfortunately, I get the same result using the API.

Here’s part of my annotations file (generated by Roboflow):

{
    "predictions": [
        {
            "x": 325.0,
            "y": 1226.0,
            "width": 60.0,
            "height": 322.0,
            "confidence": 0.9688357710838318,
            "class": "Weave Poles 12",
            "points": [
                {
                    "x": 299.8046875,
                    "y": 1385.7005708454208
                },
                {
                    "x": 296.30295257491224,
                    "y": 1376.0
                },
                {
                    "x": 297.10811289123984,
                    "y": 1312.0
                },
                {
                    "x": 304.0752526898058,
                    "y": 1296.0
                },
                {
                    "x": 306.9442384466651,
                    "y": 1244.8000000000002
                },
                {
                    "x": 315.9199401701312,
                    "y": 1222.4
                },
                {
                    "x": 316.47860157651996,
                    "y": 1196.8
                },
                {
                    "x": 325.50313513077737,
                    "y": 1164.8
                },
                {
                    "x": 326.10573726016736,
                    "y": 1139.2
                },
                {
                    "x": 335.09339685369946,
                    "y": 1113.6000000000001
                },
                {
                    "x": 337.95564157857143,
                    "y": 1065.6000000000001
                },
                {
                    "x": 352.5703125,
                    "y": 1064.862048165479
                },
                {
                    "x": 352.6696640154906,
                    "y": 1107.2
                },
                {
                    "x": 345.829706672372,
                    "y": 1120.0
                },
                {
                    "x": 343.0482209006681,
                    "y": 1161.6000000000001
                },
                {
                    "x": 334.0227594283532,
                    "y": 1184.0
                },
                {
                    "x": 333.5718575274976,
                    "y": 1212.8
                },
                {
                    "x": 324.43500994973067,
                    "y": 1241.6000000000001
                },
                {
                    "x": 323.79906792035376,
                    "y": 1273.6000000000001
                },
                {
                    "x": 314.8615035996504,
                    "y": 1305.6000000000001
                },
                {
                    "x": 314.3409620912902,
                    "y": 1350.4
                },
                {
                    "x": 299.8046875,
                    "y": 1385.7005708454208
                }
            ],
            "image_path": "input/340109653_1378244462967163_5103473470841864968_n.jpg",
            "prediction_type": "InstanceSegmentationModel"
        },
...
],
    "image": {
        "width": "1535",
        "height": "2048"
    }
}

I updated the auto-annotate code locally to do the upload:

def annotate(
    source_image_directory: str,
    target_annotation_directory: str,
    roboflow_api_key: str,
    roboflow_project_id: str,
    roboflow_project_version: int,
    detection_confidence_threshold: int = 40,
    detection_iou_threshold: int = 30,
) -> None:
    image_source_paths = flatten_lists(
        [
            get_directory_content(
                directory_path=source_image_directory, extension=extension
            )
            for extension in SUPPORTED_IMAGE_FORMATS
        ]
    )[:1]
    rf = Roboflow(api_key=roboflow_api_key)
    project = rf.workspace().project(roboflow_project_id)
    model = project.version(roboflow_project_version).model

    for image_source_path in tqdm(image_source_paths):
        annotations = model.predict(
            image_path=image_source_path,
            confidence=detection_confidence_threshold,
            # overlap=detection_iou_threshold
        ).json()
        source_image_file_name = os.path.basename(image_source_path)
        file_name = os.path.splitext(source_image_file_name)[0]
        target_json_file_name = f"{file_name}.json"
        target_json_path = os.path.join(
            target_annotation_directory, target_json_file_name
        )
        dump_to_json(target_json_path, annotations)
        project.upload(
            image_source_path, "output/" + target_json_file_name, image_id=file_name
        )
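
A call to this modified function would look something like this (the argument values are hypothetical):

annotate(
    source_image_directory="input",         # hypothetical image directory
    target_annotation_directory="output",   # hypothetical output directory
    roboflow_api_key="YOUR_API_KEY",        # placeholder API key
    roboflow_project_id="your-project-id",  # placeholder project ID
    roboflow_project_version=1,             # placeholder version number
)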

Here’s what the image looks like in Roboflow:

You can see from the prediction payload that it was generated by an instance segmentation model, and here’s confirmation that the project itself is also an instance segmentation project:
[image: Screenshot 2023-04-20 at 9.36.59 AM]

Any help is appreciated!

(1) Is this what the image looks like in Roboflow for the annotations you were uploading for the image? As in, this was a successful example?

(2) Or are you saying this was an image that already existed, and you were looking to push new labels to it?

(3) Or rather, is this what the image should look like if the annotations upload correctly with the annotate API or Upload API methods?

@Mohamed It is (1): I successfully annotated the image using the annotation API, uploaded the generated JSON file (which contains points, not bounding boxes), and that is what the image looks like after upload (whether uploaded manually through the Roboflow UI or via the Roboflow Python library to the API).

It appears the Roboflow server is treating prediction points in the Roboflow JSON format for an instance segmentation model as though they are bounding boxes.

The image should have segmentation polygons.

@Mohamed To further clarify, it is a successful upload BUT Roboflow is not showing the correct annotations. Roboflow is importing the segmentation polygons incorrectly: it is converting them to bounding boxes.

Is there any other way to upload instance segmentation annotations into Roboflow?

@Mohamed has anyone reproduced this bug? Is there any workaround to upload instance segmentation annotations into Roboflow?

Yes please, I need a solution for this too.
@saschwarz & @Mohamed did you find a solution?


@Spyderman @saschwarz it’s an issue with the system parsing the saved labels. COCO JSON and YOLO formats for instance segmentation will work: Computer Vision Annotation Formats

Predictions saved in those formats should do the trick. I’ll take a look at a code snippet for that this week to improve the upload process.
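
In the meantime, here is a rough sketch of converting a saved Roboflow prediction JSON (like the one above) into a minimal single-image COCO JSON file with segmentation polygons. This is an untested outline; file names and category handling would need to match your data:

import json

def roboflow_predictions_to_coco(prediction_json_path, image_file_name, coco_json_path):
    """Convert a saved Roboflow prediction JSON into a minimal single-image
    COCO JSON file with segmentation polygons."""
    with open(prediction_json_path) as f:
        data = json.load(f)

    width = int(data["image"]["width"])
    height = int(data["image"]["height"])

    # Build the category list from the classes seen in the predictions.
    class_names = sorted({p["class"] for p in data["predictions"]})
    categories = [{"id": i + 1, "name": name} for i, name in enumerate(class_names)]
    name_to_id = {name: i + 1 for i, name in enumerate(class_names)}

    images = [{"id": 1, "file_name": image_file_name, "width": width, "height": height}]

    annotations = []
    for ann_id, pred in enumerate(data["predictions"], start=1):
        # COCO stores each polygon as a flat [x1, y1, x2, y2, ...] list.
        polygon = [coord for point in pred["points"] for coord in (point["x"], point["y"])]
        # Roboflow x/y are the box center; COCO bbox wants the top-left corner.
        x_min = pred["x"] - pred["width"] / 2
        y_min = pred["y"] - pred["height"] / 2
        annotations.append({
            "id": ann_id,
            "image_id": 1,
            "category_id": name_to_id[pred["class"]],
            "segmentation": [polygon],
            "bbox": [x_min, y_min, pred["width"], pred["height"]],
            "area": pred["width"] * pred["height"],  # box area as an approximation
            "iscrowd": 0,
        })

    coco = {"images": images, "annotations": annotations, "categories": categories}
    with open(coco_json_path, "w") as f:
        json.dump(coco, f, indent=4)

The resulting file could then be passed as annotation_path to project.upload alongside the image.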

Having a similar issue. Any progress here?

I’m using the same code to upload another set of annotated images, but it fails with:
File not found or uploading to non-classification type project with invalid string

I’m not sure what this means.
The only difference is that the new JSON contains image_id=0 again. But shouldn’t every import be treated as a new batch of images?
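
For what it’s worth, that error string comes from the __annotation_upload branch quoted earlier: it is printed when the annotation path neither exists on disk nor can be used as a class name for a classification project. A quick sanity check before calling upload (the path below is a placeholder) may narrow it down:

import os

annotation_path = "output/example.json"  # placeholder: the path being passed to project.upload
# The upload code only reads the annotation file if this is True;
# otherwise, on a non-classification project, it prints the error above.
print(os.path.exists(annotation_path), os.path.abspath(annotation_path))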