How to generate classes to map sharks in 3D space and instance segmentation for specific parts

I’m working on a research project to identify sharks. To do so, I’m training a model to extract the shark from an image and to extract specific parts that enable identification of each individual.

Researching Roboflow blog posts, YouTube references, and documentation, I couldn’t find an answer to my questions.

  1. Since the shark exists in 3D space, what is the best way to divide my classes, given that there is no way to add attributes like front, left side, right side, back, above, or below? What is your recommendation for training the dataset?

  2. I have a use case where I need to extract two parts of the shark to run local feature matching. I was able to create a shark class that returns the whole shark, and I then tried to create a new class called pigmentation_head; after annotating 244 images in the dataset, this new class is still not being detected. What is your suggestion for mapping the two ROIs in the figure below?

Thank you

Best regards

  • Project Type: Instance Segmentation
  • Operating System & Browser: macOS & Chrome
  • Project Universe Link or Workspace/Project ID: tiger-shark-wpcdu

Hey @raphaelgatti ! Some thoughts I had for this:

  1. I think I’d want to know what classes you are planning to create. If you literally want a class called “front”, training a model to identify the head should work well. Left side is more difficult, but from the top it would always be left of the top fin, and from the bottom it would be right of the belly, so a model would likely pick up those patterns with enough annotations.
  2. I think your best bet is to include some of the fins or other defining features in those annotations. That way the model has some better anchors to look for in the image. (This is assuming those pigmentation areas are always in the same position relative to the fins.)

A couple of ideas to get you started. Cool project!

Hi @Automatez, thanks for the ideas.

  1. I started by creating only the shark class, mapping as many different shark positions as possible. This relates to the next step in improving training: in some positions, the ROIs I’m interested in are not visible, so I had the idea of splitting the shark class into multiple classes according to the shark’s position. For instance, if I create and train a shark_front class, I know those images won’t contain the ROIs I’m trying to assess. My question, based on more experienced models: is splitting the classes into subsets like this a good approach? I shared some examples below to illustrate.

  2. Now I understand why I’m probably not getting results: there are no anchor points. In the examples shared below, you can see that I tried to extract the front part of the shark, but there are no relevant anchors for the model to be able to map that area. That’s why I’m not getting any results from training this class.

Note below that I have two classes/masks: one maps the whole shark, and that one is working fine. The pink polygon is the ROI I’m trying to extract; my idea was to train on as many positions as possible, then normalize the perspective and apply the LightGlue model to extract the ID of each shark.
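As a sketch of the crop step I have in mind: the polygon annotation is a list of (x, y) vertices, so extracting the ROI before matching could look roughly like this plain-numpy rasterization (the names `polygon_mask` and `crop_roi` are just illustrative; in practice one would likely rasterize the mask with an image library instead):

```python
import numpy as np

def polygon_mask(polygon, height, width):
    """Rasterize a polygon (list of (x, y) vertices) into a boolean mask.

    Uses the even-odd rule: a pixel centre is inside if a horizontal ray
    from it crosses the polygon boundary an odd number of times.
    """
    poly = np.asarray(polygon, dtype=float)
    px, py = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    inside = np.zeros((height, width), dtype=bool)
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        crosses = (y1 > py) != (y2 > py)  # edge spans this pixel row
        with np.errstate(divide="ignore", invalid="ignore"):
            x_int = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (px < x_int)  # toggle on each crossing
    return inside

def crop_roi(image, polygon):
    """Zero out everything outside the polygon, then crop to its bounding box."""
    h, w = image.shape[:2]
    mask = polygon_mask(polygon, h, w)
    out = np.where(mask[..., None] if image.ndim == 3 else mask, image, 0)
    ys, xs = np.nonzero(mask)
    return out[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

The idea is that each cropped, masked patch (the pink polygon region) becomes the input to the perspective normalization and matching stages.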

Thanks,
Best regards

More examples from the training dataset.

  1. Ah yes, that’s a good idea to separate “shark_front” and know it’s not useful. I do think that, with enough images, it would be an identifiable class for the model.

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.