I’m currently working on a logistics-related project, the aim of which is to automate parcel transport by using volumetric weight. I need to use a model that takes images as input and extracts the dimensions of the objects on the image (length, width and height) to then calculate the volumetric weight.
I’m new to Roboflow, so I’d like to know: how can I use Roboflow to retrieve the dimensions of objects in an image?
Hi @mamady1999, thank you for considering Roboflow Inference!
Inference Workflows would be perfect for this task. We currently don’t have a block that calculates volumetric weight; however, you could experiment with the block that reduces a contour to a requested number of vertices.
You can experiment with any public model, or train your own model to detect parcels, and then use that model in the workflow builder at https://app.roboflow.com. I found this model on Roboflow Universe.
Based on the above model and the contour-reducer idea, I was able to quickly build a workflow like the one below.
I uploaded the photo below (found on Google):
The above workflow detects the box and then finds 7 vertices, like below:
"dynamic_zone": [
  [
    [1232, 304],
    [384, 822],
    [515, 1260],
    [934, 1619],
    [1580, 979],
    [1699, 572],
    [1654, 502]
  ]
]
From the vertices you can derive edges (the vertices are ordered to form a contour). You can then pick the three edges that represent width, height, and length, and I think you can calculate the volumetric weight from those.
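As a rough sketch in plain Python (this is not Roboflow API code), the edge lengths can be computed from the ordered vertices above. The pixel-to-cm ratio and the 5000 cm³/kg divisor are illustrative assumptions; carriers commonly use divisors such as 5000 or 6000, and the ratio would come from your own calibration:

```python
import math

# Ordered contour vertices (pixels) from the workflow output above.
vertices = [
    (1232, 304), (384, 822), (515, 1260), (934, 1619),
    (1580, 979), (1699, 572), (1654, 502),
]

# Edge lengths between consecutive vertices (the contour is closed,
# so the last vertex connects back to the first).
edges = [
    math.dist(vertices[i], vertices[(i + 1) % len(vertices)])
    for i in range(len(vertices))
]

# Assumption: take the three longest edges as width/height/length candidates.
w_px, h_px, l_px = sorted(edges, reverse=True)[:3]

# Assumption: a calibrated pixels-per-cm ratio (see reference-object discussion).
PX_PER_CM = 10.0
w, h, l = (d / PX_PER_CM for d in (w_px, h_px, l_px))

# Volumetric weight in kg, using the common 5000 cm^3/kg divisor.
volumetric_weight = (w * h * l) / 5000
print(round(volumetric_weight, 2))
```

Note that picking the three longest edges is a simplification: in a real image two visible edges may correspond to the same physical dimension, so you would likely need extra logic to assign edges to axes.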
Please give it a try, I’ll be happy to assist if you have any questions about the platform.
Many thanks!
Grzegorz
Thank you very much for your support, I will try this approach.
In this case, will I need reference objects to convert pixel dimensions into real dimensions?
Hi @mamady1999,
Yes, and the best approach would be to train a model that detects both parcels and a reference object (e.g. a ruler or scale). You can then modify the workflow to provide the reference object's dimensions in pixels.
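The reference-object conversion itself is simple arithmetic. A minimal sketch, where all the numbers are illustrative assumptions rather than real detections:

```python
# Convert pixel measurements to millimeters using a reference object
# of known real-world size detected in the same image.

ref_width_mm = 100.0   # known physical width of the reference object (assumed)
ref_width_px = 200.0   # measured width of the reference in the image (assumed)

mm_per_px = ref_width_mm / ref_width_px  # 0.5 mm per pixel

parcel_edge_px = 820.0                    # a detected parcel edge, in pixels
parcel_edge_mm = parcel_edge_px * mm_per_px
print(parcel_edge_mm)  # prints 410.0
```

This assumes the reference object lies roughly in the same plane and at the same distance from the camera as the parcel; otherwise perspective distortion will skew the ratio.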
Is there a workaround to using a reference object? Most of the solutions I’ve seen emphasize this approach.
Hi @mamady1999,
Below are my thoughts on your question. My intention is to give you more ideas; however, keep in mind I have not built a workflow like this before, so I cannot guarantee accuracy.
If a reference object is not an option, I’d go for declaring extra parameters that store pixel-to-millimeter ratios. I’d experiment with one ratio for the dimensions parallel to the floor (i.e. width and length) and another for the dimension perpendicular to the floor (i.e. height). These ratios could be measured while commissioning the camera: imagine capturing one frame with a ruler lying on the floor, and another with the ruler standing perpendicular to it. It would probably be a good idea to use the perspective converter block to stabilize the ratio across a wider area of the field of view (in which case the measurement should be performed on a frame with perspective correction applied). It would also be important for the camera's field of view not to change once commissioned, and for the resolution of the stream provided by the camera not to change either (some cameras offer configurable stream resolutions).
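The two-ratio idea above can be sketched in a few lines. The ratio values and pixel measurements here are hypothetical: they stand in for whatever you would measure once during camera commissioning with the ruler frames described above:

```python
# Hypothetical calibration constants, measured once during commissioning
# (e.g. from a frame with a ruler on the floor and one standing upright).
PX_PER_MM_FLOOR = 2.5      # for dimensions parallel to the floor
PX_PER_MM_VERTICAL = 2.0   # for the dimension perpendicular to the floor

def parcel_dimensions_mm(width_px, length_px, height_px):
    """Convert pixel dimensions to millimeters using per-axis ratios."""
    return (
        width_px / PX_PER_MM_FLOOR,
        length_px / PX_PER_MM_FLOOR,
        height_px / PX_PER_MM_VERTICAL,
    )

w, l, h = parcel_dimensions_mm(500, 750, 420)
print(w, l, h)  # prints 200.0 300.0 210.0
```

The caveat remains that fixed ratios are only valid as long as the camera position, field of view, and stream resolution stay exactly as they were at calibration time.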
Hope this helps,
Grzegorz
Thank you very much for your help; thanks to it, I’ve made a lot of progress. Thank you so much!
This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.