Challenges with Detections Stitch

  • Project Type: Workflow for segmentation
  • Operating System & Browser: macOS 15.6.1 & Google Chrome
  • Project Universe Link or Workspace/Project ID: Link to workflow

Hello,

I will preface by saying that I am an amateur with coding and Roboflow. My ultimate goal is to train a model that can help me segment and characterize the size/shape of particles in a microscope image for my graduate research. I am still trying to train my model, but drawing polygons around thousands of objects is an arduous task.
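For context, the measurements I eventually need are straightforward once segmentation masks exist. Here is a rough sketch of the kind of OpenCV post-processing I have in mind (the microns-per-pixel value is a placeholder from my microscope calibration, and it assumes a single binary mask with all particles in white):

```python
import cv2
import numpy as np

MICRONS_PER_PIXEL = 0.5  # placeholder: comes from microscope calibration

def measure_particles(mask: np.ndarray) -> list[dict]:
    """Measure each particle in a binary mask (255 = particle, 0 = background)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    measurements = []
    for contour in contours:
        area_px = cv2.contourArea(contour)
        if area_px == 0:
            continue  # skip degenerate contours
        perimeter_px = cv2.arcLength(contour, True)
        measurements.append({
            "area_um2": area_px * MICRONS_PER_PIXEL ** 2,
            # diameter of the circle with the same area
            "eq_diameter_um": 2 * np.sqrt(area_px / np.pi) * MICRONS_PER_PIXEL,
            # 1.0 for a perfect circle, lower for irregular shapes
            "circularity": 4 * np.pi * area_px / perimeter_px ** 2,
        })
    return measurements
```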

My new strategy is to build a workflow that helps me annotate images I can then use to train my model. To do this, I am trying to segment images with the SAHI method so it can recognize both large objects and very fine features (~10x10 pixels). I slice the images first and then apply the model that I trained. However, every time I try to stitch the images back together, I get the following message: “Dimensions for crops is expected to be saved in data key parent_dimensions of sv.Detections, but could not be found. Probably block producing sv.Detections lack this part of implementation or has a bug.” Am I missing something?

This is an example image that I am trying to segment. If somebody has a suggestion for a more efficient way to annotate all of the particles below, I would greatly appreciate that too!

You’re on the right track! Slicing and detecting is a good strategy for small objects. For your Detections Stitch block, double-check that it is using “Input > Image” as the Reference Image (and not the Image Slicer output). That could be one issue.
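If the block keeps erroring even with the right Reference Image, one way to sanity-check your model outside of Workflows is supervision’s InferenceSlicer, which runs the same slice-detect-stitch cycle locally and maps detections back to full-image coordinates for you. A minimal sketch (the model ID, API key, and filename are placeholders for your own):

```python
import cv2
import supervision as sv
from inference import get_model

model = get_model(model_id="your-project/1", api_key="YOUR_API_KEY")  # placeholders

def callback(image_slice):
    # run the trained model on one slice and convert to sv.Detections
    result = model.infer(image_slice)[0]
    return sv.Detections.from_inference(result)

slicer = sv.InferenceSlicer(callback=callback, slice_wh=(640, 640))
image = cv2.imread("slide.jpg")
detections = slicer(image)  # already in full-image coordinates
print(f"{len(detections)} particles found")
```

If this works locally but the Workflow still errors, that points at the block wiring rather than your model.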

But then, as you say, your main question is how to annotate all of those particles.

I tried the Auto-Label function (I’m guessing you did as well) to create an initial set of segmentation annotations. I found it was only getting about 75% of the detections, as you can see below. That would give you a model to start from, but probably not as much help as you hoped, given how much annotating you would still have to do.

If you really want to minimize annotation work, I wonder if you could switch strategies: train an object detection model, then build a classification model to run against those individual detections, so it can decide whether each one is an air bubble vs. a pollen grain. (Or maybe even a segmentation model.) I threw just 10 test images into Roboflow’s Rapid tool at https://rapid.roboflow.com/, and when I moved the sensitivity slider I got almost all of the detections. You can then manually fix the few that are missed or wrong, since it sometimes groups 3 or 4 particles as a single detection.

(Note: you need to “Find Objects”, move the slider, then “Approve”, and THEN you can still make edits to get it 100% accurate. Be sure to do this for every image before the “Review Model” step.)

Then you train a model on that. Now you’ve quickly isolated all of the items on the slide image and can do what you need from there (including stitching back to the original image).
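To make the two-stage idea concrete, here is a rough sketch of detect-then-classify on crops once both models exist. Both model IDs are placeholders, `sv.crop_image` cuts each box out of the image, and `top` is (if I remember right) how inference reports the best class for single-label classification:

```python
import cv2
import supervision as sv
from inference import get_model

detector = get_model(model_id="particle-detect/1", api_key="YOUR_API_KEY")     # placeholder
classifier = get_model(model_id="particle-classify/1", api_key="YOUR_API_KEY")  # placeholder

image = cv2.imread("slide.jpg")
detections = sv.Detections.from_inference(detector.infer(image)[0])

labels = []
for xyxy in detections.xyxy:
    crop = sv.crop_image(image=image, xyxy=xyxy.astype(int))  # cut out one detection
    result = classifier.infer(crop)[0]
    labels.append(result.top)  # top predicted class, e.g. "air_bubble" vs. "pollen"
```

Because the detector runs on the full image, the boxes (and therefore the crops and labels) are already in original-image coordinates, so no extra stitching step is needed in this flow.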