Hi there,
I am using WORKFLOW to extract the sizes of the instances detected by the model.
However, the size values exported by the model don't make sense (they are supposed to be in pixels, right?). The numbers are way higher than expected, and what is odd is that they don't seem to follow any particular proportion.
I am analysing an image of 853x640; the total area of the parts in the prediction should be less than half of the image, but it's about the same as the total image size (so it's way off, not a small error).
Could it be that the size property is not the area of the polygon in pixels, but of a bounding box or something similar?
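For reference, here is a minimal sketch of the check I have in mind, comparing the polygon's pixel area (shoelace formula) against the bounding-box area for one detection. The field names (`points`, `width`, `height`) are just my guess at the prediction structure, not something I've confirmed:

```python
# Sketch: compare polygon area (shoelace formula) vs bounding-box area
# for one detection. Field names like "points", "width", "height" are
# assumptions about the prediction JSON, not confirmed.

def shoelace_area(points):
    """Area of a polygon given [(x, y), ...] vertices, in square pixels."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical detection pulled from the workflow output
detection = {
    "points": [(10, 10), (60, 10), (60, 40), (10, 40)],  # polygon vertices
    "width": 50,
    "height": 30,
}

polygon_area = shoelace_area(detection["points"])
bbox_area = detection["width"] * detection["height"]
print(f"polygon area: {polygon_area} px^2, bbox area: {bbox_area} px^2")
```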
I am just a beginner here; just let me know if you need any links on my side.
Hey Edddy! Great question. And you're right, it is kind of hard to know what you are facing since we can't see the workflow. But I ran one and the JSON output is below. It does generate a width and height based on the detection, not an area. And you can see my W x H of 193 x 235 is much smaller than the whole image of 2048 x 1542 (as I would expect). Is that screenshot similar to what you are seeing for an output?
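If it helps, here's a rough sketch of how you could total up width × height per detection and compare it against the image area. The prediction fields mirror what's in my JSON output, but treat the exact structure as an assumption for your own workflow:

```python
# Sketch: sum width * height across detections and compare against the
# image area. The JSON structure here is illustrative, based on typical
# detection outputs (x, y, width, height per prediction).

predictions = [
    {"x": 400, "y": 300, "width": 193, "height": 235},
    {"x": 900, "y": 700, "width": 120, "height": 180},
]
image_w, image_h = 2048, 1542

total_boxes = sum(p["width"] * p["height"] for p in predictions)
print(f"sum of box areas: {total_boxes} px^2")
print(f"image area:       {image_w * image_h} px^2")
```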
I think it's not exactly the same; I am using the sizes from the "property definition" block, which only gives one value in pixels (not height and width). In the end, it's just a matter of how you define the concept of "size", hehe.
If that is the case, since it is detecting many objects throughout the image, the overlap of the "bounding boxes" is what produces a total area higher than the area of the whole image. That makes sense; I just wanted to confirm.
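Here's a toy example (made-up boxes on an 853x640 image) showing that double counting: the sum of the individual box areas can exceed the image area, while the actual union never does:

```python
# Sketch showing why summed box areas can exceed the image area once
# boxes overlap: the overlap region is counted once per box. Boxes are
# (x_min, y_min, x_max, y_max); values are made up for illustration.

def area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def intersection(a, b):
    return (max(a[0], b[0]), max(a[1], b[1]),
            min(a[2], b[2]), min(a[3], b[3]))

box_a = (0, 0, 700, 500)
box_b = (200, 150, 853, 640)  # overlaps box_a

summed = area(box_a) + area(box_b)
union = summed - area(intersection(box_a, box_b))
print(f"sum of areas: {summed} px^2")  # 669970, exceeds 853*640 = 545920
print(f"union area:   {union} px^2")   # 494970, fits inside the image
```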