Different objects for the same class

Does the model perform worse when you annotate different objects with the same class, i.e. when the class concept is the same but it can show up as different objects in the images?

It might help if you provide more detail; there could be lots of ways to get around this. One thing you could mean is that you have a class like "red_things" and you annotate both a fire hydrant and an apple under it. If you annotated enough, the model would probably find the pattern of red, although in that particular example there are simpler ways to just find lots of red pixels. Alternatively, if it's only fire hydrants and apples and absolutely nothing else, it would take less annotating to build a "fire hydrant" class and an "apple" class, train to detect both items, and then rename the classes to "red_things" when you go to work with the object detections.
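If you go the train-separate-then-rename route, the renaming step can be a simple lookup over the returned detections. Here is a minimal sketch, assuming detections come back as a list of dicts with a "class" key; the exact structure and keys depend on whichever inference library you are using, so adapt accordingly:

```python
# Map the fine-grained training classes to the merged label you actually care about.
# The detection format (dicts with "class", "confidence", "bbox") is an assumption;
# adjust the keys to match what your inference library returns.
MERGE_MAP = {
    "fire hydrant": "red_things",
    "apple": "red_things",
}

def merge_classes(detections):
    """Rename fine-grained detection classes to their merged display name."""
    merged = []
    for det in detections:
        det = dict(det)  # copy so the original detection list stays untouched
        det["class"] = MERGE_MAP.get(det["class"], det["class"])
        merged.append(det)
    return merged

# Example usage with made-up detections:
preds = [
    {"class": "fire hydrant", "confidence": 0.91, "bbox": [10, 20, 50, 80]},
    {"class": "apple", "confidence": 0.84, "bbox": [120, 40, 160, 75]},
]
print(merge_classes(preds))
```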

Maybe that helps, or drop some examples and others might have some good ideas for you!

Sure. I’m working on a project about fractures, and I trained a model for fracture types. The classes I want to look for are offset blunt fracture (pink box) and displaced fracture (blue box). The problem with the displaced fracture is that there are few images with that fracture, so I got low stats on the validation and test sets. But both can be interpreted as displaced blunt fractures; there’s also a class for a displaced point fracture. You can see in the images that the patterns for the displaced fracture vary a lot. So, would the detection probability increase if I put everything under offset blunt fracture, since there are more images available? From what I see, to detect an object the model has to find similar patterns in the pixels, so I suspect putting everything under one label won’t change much.
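One low-effort way to test the single-label idea empirically is to remap the class indices in a copy of the annotation files and retrain. Below is a minimal sketch assuming YOLO-format .txt labels (one `class_id x y w h` line per box); the directory names and the class indices (0 = offset blunt fracture, 1 = displaced fracture) are placeholders, so adjust them to your own label map:

```python
from pathlib import Path

# Hypothetical class indices: 0 = offset blunt fracture, 1 = displaced fracture.
# Fold the rare class into the common one; adjust to match your own label map.
REMAP = {1: 0}

SRC = Path("labels_original")   # placeholder directory of YOLO .txt label files
DST = Path("labels_merged")     # remapped copies are written here
DST.mkdir(exist_ok=True)

for label_file in SRC.glob("*.txt"):
    lines_out = []
    for line in label_file.read_text().splitlines():
        if not line.strip():
            continue
        parts = line.split()
        cls = int(parts[0])
        parts[0] = str(REMAP.get(cls, cls))  # merge mapped classes, keep the rest
        lines_out.append(" ".join(parts))
    (DST / label_file.name).write_text("\n".join(lines_out) + "\n")
```

Training once on the merged labels and once on the original ones, and comparing validation metrics, would tell you directly whether pooling the examples helps.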
