What type of project is best for diagnosing the features of bone fractures in dogs?

Would a classification project or an object detection project be better?

For object detection, I thought of drawing the bounding boxes like this, so that future predictions cover labels for these three features:

Yellow mark: Type of bone
Green mark: Location of the fracture in the bone
Red mark: Fracture type

Do you think this would be optimal, or is there a better way to draw the bounding boxes?

  • Project Type: Diagnosis of radiologic images of bone fractures in dogs
  • Operating System & Browser: Windows 11 / Brave.

Hi @Matias_Garay!
First off, welcome to the Roboflow community! This is a great question and this is a perfect use case for workflows!

I suggest creating a workflow with the following logic:

  1. Detect the bones with an object detection model.
  2. Filter out detections from the model to only process the bones with fractures.
  3. Crop to the bone with a fracture.
  4. Detect the fracture type with an object detection model.
  5. With a separate branch, classify the fractured bone with a classification model.
  6. Return the results as JSON for use in an application.
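To make the branching concrete, here is a minimal Python sketch of that logic with all three models stubbed out as plain functions. Every name here is a placeholder for illustration, not a Roboflow API; in a real workflow these steps would map to workflow blocks.

```python
# Sketch of the suggested workflow logic with stubbed models.
# Detections use pixel-space boxes: (x_min, y_min, x_max, y_max).

def detect_bones(image):
    # Placeholder for the first object detection model.
    return [
        {"class": "femur_fractured", "box": (40, 60, 220, 400), "confidence": 0.91},
        {"class": "tibia_healthy", "box": (250, 80, 380, 420), "confidence": 0.88},
    ]

def detect_fracture_type(bone_crop):
    # Placeholder for the second object detection model.
    return [{"class": "transverse", "box": (10, 120, 160, 180), "confidence": 0.84}]

def classify_fracture_location(bone_crop):
    # Placeholder for the classification model branch.
    return {"class": "diaphysis", "confidence": 0.90}

def crop(image, box):
    # With a real image library this would slice pixels; here we just
    # carry the box along so the sketch stays self-contained.
    return {"source": image, "box": box}

def run_workflow(image):
    results = []
    for det in detect_bones(image):
        # Step 2: keep only the fractured bones.
        if "fractured" not in det["class"]:
            continue
        # Step 3: crop to the fractured bone.
        bone_crop = crop(image, det["box"])
        # Steps 4-5: run both branches on the same crop.
        results.append({
            "bone": det["class"],
            "fracture_type": detect_fracture_type(bone_crop),
            "fracture_location": classify_fracture_location(bone_crop),
        })
    # Step 6: a JSON-serializable result for the application.
    return results

print(run_workflow("radiograph.png"))
```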

Here is a wonderful blog post that walks through a similar workflow. Happy building!

Amazing Ford! And thank you so much for your welcoming, I appreciate it.

So, from what I understood, your suggestion would be like this:
There will be three models.
The first, an object detection model, to mark the bones with fractures.
The second, another object detection model, to mark the type of fracture.
Or could these two labels be in just one object detection model?
The third would be a classification model to classify the third label, the location of the fracture?

This would all work together in a workflow. Also, can a workflow become a new model to train the dataset?
On the third point, by crop do you mean drawing the polygons, or something else?
Could you clarify which labels each model would detect, based on the three marks I mentioned earlier?

Hi @Matias_Garay!
Great questions! For the object detection, you could use one or two models. I suggest starting with one model trained to detect fractured bones, with each detection labeled as a specific bone type.
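As an illustration of that labeling scheme, the single model's class list could simply be the bone types, with boxes drawn only on fractured bones. The bone names below are hypothetical examples, not a prescribed taxonomy:

```python
# Hypothetical class list for the single detection model:
# each class is a bone type, and only fractured bones are annotated,
# so every detection already means "this bone is fractured".
FRACTURED_BONE_CLASSES = ["femur", "tibia", "radius", "ulna", "humerus"]

def is_valid_label(label):
    # Reject any annotation label outside the agreed class list.
    return label in FRACTURED_BONE_CLASSES

print(is_valid_label("femur"))
```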

The workflow is where you will implement your models to create your vision application, not to make a new model.

When I say “crop”, I’m referring to the dynamic crop workflow block. This block allows you to crop an image based on the detections of a detection model.
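The arithmetic behind such a crop is simple. A sketch, assuming the detection comes in the center-based (x, y, width, height) format that many detection models output (the function name and dict keys are illustrative, not the block's actual schema):

```python
def crop_box_from_detection(detection, image_width, image_height):
    # Convert a center-format detection (x, y, width, height) into
    # pixel crop bounds, clamped so the crop stays inside the image.
    x, y, w, h = detection["x"], detection["y"], detection["width"], detection["height"]
    left = max(0, int(x - w / 2))
    top = max(0, int(y - h / 2))
    right = min(image_width, int(x + w / 2))
    bottom = min(image_height, int(y + h / 2))
    return left, top, right, bottom

# Example: a detection centered at (100, 150) sized 80x120 in a 640x480 image.
print(crop_box_from_detection(
    {"x": 100, "y": 150, "width": 80, "height": 120}, 640, 480))
# → (60, 90, 140, 210)
```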

I see, Ford. I understand how the workflow space works.
For object detection models, what would you suggest for annotations?
The first model could use bounding boxes to label the bones, and the second could use polygons to annotate the irregular shapes that some fractures have.
Or would it be better to use polygons for bone type and then bounding boxes for fracture type?

Thank you again for the support.

Hi @Matias_Garay!
Great question! It all depends on how accurate you want to be with the annotations.

Here is a great article explaining the difference between bounding boxes and polygons: Improve Accuracy: Polygon Annotations for Object Detection