Object Detection Engine

Hi,

for an app I need to recognize confined rooms and specific objects (from my list), whether they are present and in what quantity, in each room we detect from a stream, photo or video.

The objects are listed in my file (table, chair, coffee maker … ca. 200 items). So, if possible, I want the output to identify the rooms and objects and return them as a list/array …

room 1 : 1 bed, 2 chairs, 2 pillows, 2 blankets, 1 carpet

room 2 : 2 beds, 2 tables, 1 chair, …

so I can then process such a list in my app.

Question: Is there a well-trained model around that I can use, one that already has the widest possible knowledge of objects typically found in civilization on planet Earth?

Or is that something I would have to create myself? And if so, is there a model available that can run object detection on provided photos and save the cropped image portion of each object along with its annotation?

Also, how do I assign existing annotations to the image folder I upload to my Roboflow project? When I have large quantities of images and can put the corresponding annotations in a file, how do I bring them together? I once saw an XML file instruction on Roboflow, but it didn't explain it all the way through for me, and right now I can't find it again via search or by browsing the pages.

Regards, Frank

Hi @Frank_Hassani

I derived four questions from your post and I’ll answer them in order:

Is it possible to get a quantity per class result?
Yes, but it’ll require a bit of code to turn the API result from a list of each inference annotation to a list of quantities for each label.

What is the most general knowledge object detection model?
I believe that would be models trained on Microsoft's COCO dataset (around 80 common object categories), which you can check out on Universe. That being said, these general-knowledge models almost always underperform compared to task-specific models, because the more classes and contexts a model has to cover, the less capacity it has for any one of them.

I would strongly suggest finding or creating a more specialized dataset (maybe just household items, from what I can tell from your example). Try searching on Universe, or you can always create your own.

Is it possible to crop the detected bounding box (predicted area)?
Yes, but again with a bit of code. You can use the x, y, width and height information from our API result to crop the original image you submitted to the API. The exact process will depend on the language, but you can do so with PIL in Python, for example.
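
A rough sketch of that crop, assuming x and y in the response are the centre of the box and width/height its size in pixels (which is how the hosted API reports boxes); the filenames here are just examples:

```python
from PIL import Image

def crop_prediction(image_path: str, prediction: dict, out_path: str) -> None:
    """Save the region of the original image covered by one predicted box.

    Assumes prediction["x"]/["y"] are the box centre and ["width"]/["height"]
    its size in pixels, as reported by Roboflow's hosted inference API.
    """
    img = Image.open(image_path)
    x, y = prediction["x"], prediction["y"]
    w, h = prediction["width"], prediction["height"]
    box = (int(x - w / 2), int(y - h / 2), int(x + w / 2), int(y + h / 2))
    img.crop(box).save(out_path)

# Example usage with a hypothetical detection:
# crop_prediction("room.jpg", {"x": 320, "y": 240, "width": 100, "height": 80}, "chair_0.jpg")
```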

How do I upload annotations to Roboflow?
You can use the same process you use for uploading your images: drag and drop the annotation files (or select them) on your project's upload page alongside the images, and the annotations will be applied to your images automatically.
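
If you would rather do this from code than the web UI, the roboflow Python package can, as far as I recall, upload an image together with its annotation file. Treat the placeholder workspace/project names and the annotation_path parameter below as assumptions and double-check them against the package docs:

```python
from roboflow import Roboflow

# Replace the placeholder API key, workspace and project IDs with your own.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")

# Upload one image together with its annotation file (e.g. Pascal VOC XML);
# loop over your folder to handle large quantities of images.
project.upload("images/room_001.jpg", annotation_path="annotations/room_001.xml")
```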

I hope these answers help you with your project.