Amazing, Ford! And thank you so much for the warm welcome, I appreciate it.
So, from what I understood, your suggestion would be like this:
There will be three models.
The first, an object detection model: to mark the bones with fractures.
The second, another object detection model: to mark the type of fracture.
Or could these two labels be in just one object detection model?
The third would be a classification model to classify the third label, the location of the fracture?
All of this would work together in a workflow. Also, can a workflow become a new model to train the dataset?
On the third point, by "crop" do you mean adding the polygons, or something else?
Could you clarify which labels each model would detect, based on the three marks I mentioned earlier?
Hi @Matias_Garay!
Great questions! For the object detection, you could use one or two models. I suggest starting with one model trained to detect fractured bones, with each detection labeled as a specific bone type.
The workflow is where you will implement your models to create your vision application, not to make a new model.
When I say “crop”, I’m referring to the dynamic crop workflow block. This block allows you to crop an image based on the detections of a detection model.
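For intuition, here's a rough Python sketch of how the pieces chain together. The model functions, their names, and their output formats are stand-ins for illustration, not Roboflow's actual API: detection produces boxes, each box is cropped out of the image (the dynamic crop step), and each crop is passed to the classifier.

```python
import numpy as np

# Stand-ins for the two trained models; names and return values are
# hypothetical, just to show how a workflow chains them together.
def detect_fractured_bones(image):
    # Object detection model: returns (bone_label, box) pairs,
    # with boxes as (x1, y1, x2, y2) pixel coordinates.
    return [("radius", (10, 20, 50, 60))]

def classify_fracture_type(crop):
    # Classification model run on the cropped fracture region.
    return "transverse"

def run_pipeline(image):
    results = []
    for bone, (x1, y1, x2, y2) in detect_fractured_bones(image):
        crop = image[y1:y2, x1:x2]  # the "dynamic crop" step
        results.append({
            "bone": bone,
            "box": (x1, y1, x2, y2),
            "fracture_type": classify_fracture_type(crop),
        })
    return results

xray = np.zeros((100, 100), dtype=np.uint8)  # placeholder X-ray image
print(run_pipeline(xray))
```

The key point the sketch shows: the workflow wires existing models together at inference time; it doesn't train anything new.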
I see, Ford. I understand how the workflow space works.
For object detection models, what would you suggest for annotations?
The first could use bounding boxes to label bones, and the second could use polygons to label fracture type, since polygons can capture the irregular shapes some fractures have.
Or would it be better to use polygons for bone type and then bounding boxes for fracture type?