Differences between annotations generated by the same model

Why are there differences between the annotations generated by Roboflow's Label Assist and those generated by a Python script? Both are using the same model with the same parameters:

confidence = 60%
overlap = 30%

My project is about Object Detection.

We can see the difference in the image below:

On the left is the annotation generated by the script, with 14 bounding boxes, some of them overlapping.
On the right is the annotation generated by Label Assist, with only 6 bounding boxes, none of them overlapping.

I would simply like to reproduce the result on the right using the script.

My code is shown below:

import os
import json

from roboflow import Roboflow

# Initialize the Roboflow object with your API key
rf = Roboflow(api_key="MY_API_KEY")

# Retrieve your current workspace and project name
workspace = rf.workspace()

# Specify the project for upload
project = workspace.project("tdm")

# Import the model from the project
n_version = 12
model = project.version(n_version).model
model.confidence = 60
model.overlap = 30

character_id = 0
n_pages = len(os.listdir(cbs[character_id]['path']))

for (i, file_name) in enumerate(os.listdir(cbs[character_id]['path'])):
    tags = []
    if i == 0:
        pass  # branch body omitted from this paste
    elif i == n_pages - 1:
        pass  # branch body omitted from this paste
    image_path = cbs[character_id]['path'] + file_name
    annotation_path = cbs[character_id]['path'] + 'Annotation/' + file_name.split('.')[0] + '.json'

    # Run inference and save the prediction JSON to disk
    json_annotation = json.dumps(model.predict(image_path).json(), indent=4)
    with open(annotation_path, "w") as outfile:
        outfile.write(json_annotation)

    # Upload the image with its predicted annotation
    # (assuming these keyword arguments belong to project.upload())
    project.upload(
        image_path=image_path,
        annotation_path=annotation_path,
        batch_name=cbs[character_id]['batch_name'],
        tag_names=tags,
        sequence_number=i + 1,
        sequence_size=n_pages + 1,
        is_prediction=True,
    )

I really don’t understand the problem…

Hi @jvprimakipr

It looks like you are using the Roboflow Python SDK on an object detection model.

Here, it looks like you are assigning the minimum confidence and overlap as properties of the model variable, but our Python package expects these as parameters of model.predict().

From our docs:

model.predict("your_image.jpg", confidence=40, overlap=30)

To learn more, refer to our docs here to see how to infer with the Python SDK.
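As an aside, the overlap parameter controls how aggressively overlapping detections are suppressed. Roboflow's exact server-side post-processing isn't shown here, but the general idea is IoU-based non-maximum suppression, which can be sketched in plain Python (the box format and prediction structure below are illustrative, not the SDK's response schema):

```python
# Hypothetical sketch of IoU-based non-maximum suppression, to illustrate
# what confidence and overlap thresholds do. This is NOT Roboflow's
# internal implementation.

def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(preds, overlap=0.30, confidence=0.40):
    # Keep the highest-confidence box; drop any lower-confidence box
    # whose IoU with an already-kept box exceeds the overlap threshold.
    keep = []
    for p in sorted(preds, key=lambda p: p["confidence"], reverse=True):
        if p["confidence"] < confidence:
            continue
        if all(iou(p["box"], k["box"]) <= overlap for k in keep):
            keep.append(p)
    return keep

preds = [
    {"box": (0, 0, 10, 10), "confidence": 0.9},
    {"box": (1, 1, 10, 10), "confidence": 0.8},    # heavily overlaps the first
    {"box": (20, 20, 30, 30), "confidence": 0.7},
    {"box": (40, 40, 50, 50), "confidence": 0.3},  # below confidence cutoff
]
kept = nms(preds, overlap=0.30, confidence=0.40)
print(len(kept))  # 2
```

With these thresholds, the second box is suppressed (IoU 0.81 with the first) and the last one is dropped by the confidence cutoff, leaving two detections.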

I hope this resolves your issue.

Sorry to say this, but that does not change anything in the results. I don't understand why I get this problem when using the same model in both situations.

Even if I don't set the parameters at all, the results still differ between the two situations, with the default values of confidence=40 and overlap=30.

I have already read a lot of the Roboflow documentation, hoping some passage would explain this, but I didn't find anything.