Why are there differences between the annotations generated by Roboflow's Label Assist and those generated by a Python script? Both use the same model with the same parameters:
confidence = 60%
overlap = 30%
My project is an Object Detection project.
We can see the difference in the image below:
On the left is the annotation generated by the script, with 14 bounding boxes, some of them overlapping.
On the right is the annotation generated by Label Assist, with only 6 bounding boxes, none of them overlapping.
I would just like to get the result on the right using the script.
My code is shown below:
```python
import os
import json

from roboflow import Roboflow

# Initialize the Roboflow object with your API key
rf = Roboflow(api_key="MY_API_KEY")

# Retrieve your current workspace
workspace = rf.workspace()

# Specify the project for upload
project = workspace.project("tdm")

# Import the model from the project
n_version = 12
model = project.version(n_version).model
model.confidence = 60
model.overlap = 30

character_id = 0
n_pages = len(os.listdir(cbs[character_id]['path']))
for i, file_name in enumerate(os.listdir(cbs[character_id]['path'])):
    tags = []
    if i == 0:
        tags.append('cover')
    elif i == n_pages - 1:
        tags.append('comic_strip')
    image_path = cbs[character_id]['path'] + file_name
    annotation_path = cbs[character_id]['path'] + 'Annotation/' + file_name.split('.')[0] + '.json'

    # Run the prediction and save it as a JSON annotation file
    json_annotation = json.dumps(model.predict(image_path).json(), indent=4)
    with open(annotation_path, "w") as outfile:
        outfile.write(json_annotation)

    # Upload the image together with its predicted annotation
    project.upload(
        image_path=image_path,
        annotation_path=annotation_path,
        batch_name=cbs[character_id]['batch_name'],
        tag_names=tags,
        sequence_number=i + 1,
        sequence_size=n_pages + 1,
        is_prediction=True,
        # split="train",
        num_retry_uploads=2,
    )
```
I really don’t understand the problem…
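My guess (and it is only a guess) is that Label Assist applies an extra non-maximum-suppression pass that my script's raw predictions never get. If that is the case, the effect could be reproduced client-side with a small NMS helper like the sketch below. The box format (center `x`/`y`, `width`, `height`, `confidence`) follows the Roboflow prediction JSON; the sample `boxes` list is made up for illustration:

```python
# Minimal non-maximum suppression over Roboflow-style predictions.
# Assumption: each prediction is a dict with center x/y, width, height,
# and confidence, as in the JSON returned by model.predict(...).json().

def iou(a, b):
    """Intersection-over-union of two center-format boxes."""
    ax1, ay1 = a['x'] - a['width'] / 2, a['y'] - a['height'] / 2
    ax2, ay2 = a['x'] + a['width'] / 2, a['y'] + a['height'] / 2
    bx1, by1 = b['x'] - b['width'] / 2, b['y'] - b['height'] / 2
    bx2, by2 = b['x'] + b['width'] / 2, b['y'] + b['height'] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a['width'] * a['height'] + b['width'] * b['height'] - inter
    return inter / union if union > 0 else 0.0

def nms(predictions, overlap_threshold=0.3):
    """Keep only the highest-confidence box of each overlapping group."""
    kept = []
    for p in sorted(predictions, key=lambda p: p['confidence'], reverse=True):
        if all(iou(p, k) <= overlap_threshold for k in kept):
            kept.append(p)
    return kept

# Made-up sample: two heavily overlapping boxes plus one distant box.
boxes = [
    {'x': 50, 'y': 50, 'width': 40, 'height': 40, 'confidence': 0.9},
    {'x': 55, 'y': 52, 'width': 40, 'height': 40, 'confidence': 0.7},
    {'x': 200, 'y': 200, 'width': 30, 'height': 30, 'confidence': 0.8},
]
print(len(nms(boxes)))  # the two overlapping boxes collapse into one, leaving 2
```

Running this over the 14 raw predictions would tell me whether plain NMS at a 30% overlap threshold is enough to reach the 6 non-overlapping boxes Label Assist shows.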