Can an object detector learn a position instead of an object?

  1. Project type (Object detection)

I’m trying to see if a two-stage detector is helpful for my use case. I have a “region” containing some areas of interest (which will be sent through the second-stage detector).

So the question is: can the neural net learn the “position” of the areas of interest, or should I only rely on it to learn the region itself and then do some math to manually extract the areas I want from the region?

You can see what I’m trying here:

  1. (Roboflow project link: first variant)
  2. (Roboflow project link: second variant)

^ two different variants, maybe one is easier to train than the other?

Hi @vshesh - Object detection performs both classification and localization (position) of the detected objects.

The neural net would output the positions based on where it locates the objects of interest during inference.

What you are attempting to do would require passing the image pixels for the detected objects from the first model to the second model.
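As a minimal sketch of that hand-off: the first model's predictions give you box coordinates, which you can turn into pixel crops for the second model. The field names and values below follow the common center-x/center-y plus width/height convention and are illustrative, not taken from a specific response.

```python
# Hypothetical first-stage output in a Roboflow-style shape: box centers
# (x, y) plus width/height, all in pixels (field names are illustrative).
detections = [
    {"x": 320.0, "y": 240.0, "width": 100.0, "height": 80.0,
     "class": "area_of_interest", "confidence": 0.91},
]

def to_pixel_boxes(predictions):
    """Convert center-format predictions to (left, top, right, bottom)
    boxes, ready for e.g. PIL's Image.crop before the second stage."""
    boxes = []
    for p in predictions:
        left = p["x"] - p["width"] / 2
        top = p["y"] - p["height"] / 2
        boxes.append((int(left), int(top),
                      int(left + p["width"]), int(top + p["height"])))
    return boxes

print(to_pixel_boxes(detections))  # [(270, 200, 370, 280)]
```

Each resulting box can then be cropped out of the original image and fed to the second model as its own input image.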

Here is the JSON response object format: