Inference with multiple models in a loop only works on the first image
- Project type: Object detection
- Operating System & Browser: Ubuntu
Hi everyone!
For my project I'm training four RF-DETR models on the same images but with different annotation sets, in order to compare them.
For visual comparison, I'd like to run inference with each model and overlay the predictions, together with the ground truths, on the same image plot using supervision, color-coded by model. On a single image this works perfectly.
When I do this for multiple images in a row, however, inference only seems to work on the first image: all subsequent plots show only the ground truths.
I know it isn't a prediction problem, since I've verified that one of the later images in my list does produce predictions when I run inference on it by itself.
The relevant code is as follows:
for image_info in images:
    # note: the loop variable must not be reused for the loaded pixels,
    # since we still need image_info["id"] below
    image = cv2.imread(os.path.join(data_dir, image_info["file_name"]))

    # annotate the image with our ground truth results
    bounding_box_annotator = sv.BoxAnnotator(thickness=1, color=sv.Color.WHITE)
    image_annotations = annotations_by_image[image_info["id"]]
    gt_boxes = []
    for annotation in image_annotations:
        x, y, w, h = annotation["bbox"]  # COCO xywh -> xyxy
        gt_boxes.append([x, y, x + w, y + h])
    gt_boxes = np.array(gt_boxes)
    gt = sv.Detections(
        xyxy=gt_boxes,
        class_id=np.full(len(gt_boxes), 2),
        data={"class_name": np.full(len(gt_boxes), "GT", dtype="<U6")},
    )
    annotated_image = bounding_box_annotator.annotate(scene=image, detections=gt)

    # run inference on the models
    # (models is a zip of (range(num_models), str custom names, models from get_model))
    for i, model_name, model in models:
        bounding_box_annotator = sv.BoxAnnotator(thickness=1, color=colors[i])
        results = model.infer(image, confidence=0.3)[0]
        detections = sv.Detections.from_inference(results)
        # annotate the image with our inference results
        annotated_image = bounding_box_annotator.annotate(scene=image, detections=detections)

    sv.plot_image(annotated_image)
Is there some limit on running inference with multiple models simultaneously on Roboflow via the inference package? Or is it possibly a bug (either in my code or in the package)? Any help understanding the API better, as well as suggestions on how to fix this, would be appreciated.
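While writing this up, one thing I started to suspect is how the `models` zip behaves across the outer loop. Here is a minimal sketch of plain Python `zip` semantics (the model names are hypothetical placeholders, not my actual setup):

```python
# Sketch: zip() returns a one-shot iterator, so iterating it inside an
# outer loop yields nothing after the first pass.
model_names = ["model_a", "model_b"]  # hypothetical placeholder names
models = zip(range(len(model_names)), model_names)

first_pass = list(models)   # consumes the iterator
second_pass = list(models)  # already exhausted

print(first_pass)   # [(0, 'model_a'), (1, 'model_b')]
print(second_pass)  # []
```

If that is what's happening here, materializing the zip with `list(...)` before the image loop might be the fix, but I'm not sure whether something on the inference side is also involved.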