Sawtooth/jagged diagonal line when using model for label assist

I’m attempting to build a dataset and workflow to identify each game cartridge in an image. (input.jpg)

My Current Problem

My first step is labeling sufficient samples of each cartridge to be identified.

I’ve trained a model on a relatively small dataset (around 100 manually annotated images, plus augmentations) to annotate the different areas of cartridges. (classes: cartridge, label, logo, art, id) (Model Type: Roboflow 3.0 Instance Segmentation (Fast))

I’m using that model with Label Assist to annotate a much larger dataset. (Type: Object Detection)

When I’m annotating a cartridge that is square in the frame and has horizontal/vertical edges, the suggested annotation is pretty good, with straight lines. (good.jpg)

If the cartridge is skewed or relatively small within the larger image, the suggested annotations are mostly correct but have very jagged, sawtooth, or zig-zag-looking lines. (zigzag.jpg)

Why does it behave this way? I want simple, straight lines for my annotations; what can I do to get that?

My Overall Direction:

My current thinking is that my steps look like the list below. I welcome any feedback on how the data should be annotated or what types of models should be used.

Some challenges I’m aware of that I need to plan for:

  • I expect around 5,000 classes.
  • The source image may have a single cartridge or a large grid with 60+ cartridges.

Steps:

  • Annotate CartComponents
  • Train CartComponents model to simplify/speed up annotation
  • Annotate cartridge labels with a unique class per cartridgeID
  • Train CartridgeIdentifier model (with thousands of classes) to identify a cartridge
  • User Application will take an input image and use CartComponents to detect cartridges
  • User Application will crop each cartridge individually from the image
  • User Application will feed each individual cartridge image to the CartridgeIdentifier model to identify it (see the rough sketch after this list)
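
Roughly, I picture the application stage working like the sketch below. It assumes the standard Roboflow hosted-inference response format (center-based x/y/width/height); the model IDs and API key are placeholders:

```python
import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="YOUR_API_KEY",  # placeholder
)

image = cv2.imread("input.jpg")

# Stage 1: CartComponents finds every cartridge in the frame (1 to 60+).
stage1 = client.infer("input.jpg", model_id="cart-components/1")  # placeholder ID

for i, det in enumerate(stage1["predictions"]):
    if det["class"] != "cartridge":
        continue
    # Roboflow boxes are center-based; convert to corner coordinates.
    x0 = int(det["x"] - det["width"] / 2)
    y0 = int(det["y"] - det["height"] / 2)
    x1 = int(det["x"] + det["width"] / 2)
    y1 = int(det["y"] + det["height"] / 2)
    crop = image[y0:y1, x0:x1]

    # Stage 2: CartridgeIdentifier classifies the single cropped cartridge.
    crop_path = f"crop_{i}.jpg"
    cv2.imwrite(crop_path, crop)
    stage2 = client.infer(crop_path, model_id="cartridge-identifier/1")  # placeholder ID
    print(crop_path, stage2)
```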

Project Type: Object Detection
Operating System & Browser: macOS, Safari

An example of the model detecting straight lines:

An example of zigzag lines on a small item:

An example of the model drawing zigzag lines on a high res tilted item:

Hey zevlag! One really quick option: switch to object detection so you get a bounding box instead of a tight polygon fit. Of course, this will give an axis-aligned (upright) box when the game is at an angle, so the box will include some background.
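
If you go that route but want to reuse the polygon suggestions you already have instead of re-annotating, the axis-aligned box is just the min/max of the polygon points. A quick sketch (the polygon values are made up):

```python
import numpy as np

# Made-up jagged polygon from an instance segmentation prediction.
polygon = np.array([(102, 35), (180, 40), (178, 160), (99, 155), (101, 90)])

# The axis-aligned box is the extremes of the points; on a tilted
# cartridge this box will include some background, as noted above.
x_min, y_min = polygon.min(axis=0)
x_max, y_max = polygon.max(axis=0)
print(x_min, y_min, x_max, y_max)  # -> 99 35 180 160
```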

Another option you have is to just move ahead with the jagged lines and fix that issue later. Here’s that explanation.

I ran a quick example that found part of a playing card, but as you can see, I also got the jagged output of the instance segmentation model (one I borrowed from Roboflow Universe). It generates many points around the predicted area.

But then I used Workflows to run the model AND added a Dynamic Zone block as a processing step after the model finds the instance, and here I can specify I only want 4 corners. Here’s what the workflow looks like.
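
Under the hood, that step is doing polygon simplification. If you’d rather do the same thing locally on exported predictions, OpenCV’s Douglas-Peucker approximation is one way to get there. A sketch, assuming you have the prediction’s (x, y) vertices (the helper name and sample points are mine):

```python
import numpy as np
import cv2

def simplify_to_quad(points, max_iter=100):
    """Raise the Douglas-Peucker tolerance until the jagged
    polygon collapses to 4 (or fewer) corner points."""
    contour = np.array(points, dtype=np.float32).reshape(-1, 1, 2)
    epsilon = 0.01 * cv2.arcLength(contour, closed=True)
    for _ in range(max_iter):
        approx = cv2.approxPolyDP(contour, epsilon, closed=True)
        if len(approx) <= 4:
            break
        epsilon *= 1.2  # loosen the tolerance and try again
    return approx.reshape(-1, 2)

# Made-up jagged outline; in practice, feed in the segmentation points.
jagged = [(10, 12), (11, 20), (13, 28), (96, 30), (98, 17),
          (99, 60), (97, 90), (14, 88), (12, 55)]
print(simplify_to_quad(jagged))
```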

And now here’s what I have for an identified zone.

So you can see I now have 4 straight lines (they only look jagged because of the image resolution). And you can output the coordinates if you need the actual numbers, or…

…you can also just do a crop immediately and only capture that general part of the image. And you could throw another model detection into the workflow to only run on that cropped piece.
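
And if you want that crop deskewed before it hits the second model, you can warp the 4-corner zone to an upright rectangle. A minimal sketch, assuming the corners are ordered top-left, top-right, bottom-right, bottom-left (the output size is just a placeholder):

```python
import numpy as np
import cv2

def crop_quad(image, corners, out_w=300, out_h=400):
    """Warp a 4-corner zone to an upright rectangle so the second
    model sees a deskewed cartridge. Corner order: tl, tr, br, bl."""
    src = np.array(corners, dtype=np.float32)
    dst = np.array([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]],
                   dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, matrix, (out_w, out_h))

# Example with made-up corners taken from the Dynamic Zone output:
# cart = crop_quad(cv2.imread("input.jpg"),
#                  [(120, 40), (400, 55), (390, 420), (110, 410)])
```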

I tried to throw out a few options there. There are additional variations on all of that depending on your needs. If you still need the annotations to have straight lines from the beginning, maybe expand on why that’s important, and I or others might be able to chime in with more solutions!