I’m working on a pool ball/billiard ball detection app.
I am having issues with the annotation boxes being too big. It reads the balls correctly and accurately, but when I have more than one pool ball on the table, there's only one annotation box and the box is as big as the screen.
I feel like this is something super simple that I am overlooking.
I’m using an OAK-D camera and cannot seem to find what the issue is.
Sounds like an interesting issue. Could you tell me the specific workspace and project you are working on so I can help you better? If you have any screenshots as well, that might be helpful.
Is it possible you are on an image classification project, instead of an object detection project?
On image classification, the bounding boxes look like they are as big as the screen, possibly like you described.
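One quick way to check which project type you're on is to look at the prediction output itself. Below is a hedged sketch (the JSON shapes are illustrative, based on typical object-detection vs. classification responses, not an exact copy of any API): detection returns one box per object, while classification returns a single whole-image label.

```python
# Hypothetical response shapes for the two project types.
detection_response = {
    "predictions": [
        {"class": "4", "confidence": 0.93, "x": 120, "y": 200, "width": 40, "height": 40},
        {"class": "7", "confidence": 0.88, "x": 310, "y": 180, "width": 42, "height": 41},
    ]
}
classification_response = {
    "predictions": [{"class": "4", "confidence": 0.97}]  # one label, no box
}

def has_boxes(resp):
    """True if every prediction carries box dimensions (i.e., detection output)."""
    return all("width" in p and "height" in p for p in resp["predictions"])

print(has_boxes(detection_response))       # True: per-object boxes
print(has_boxes(classification_response))  # False: whole-image label only
```

If your responses look like the second shape, the project is classification, and any box you draw from it will default to covering the whole frame.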
The workspace associated with my project is “ben-gann-lscqy,” and the specific project is named “pool-balls-qlnkb.”
Currently, my model is getting 99.5% mAP, 97.9% precision, and 100% recall.
Any insights or suggestions you can offer to resolve this issue would be greatly appreciated.
I’ve got some updates and a couple of issues where I could use your help.
First, the good news: the system can now spot up to 3-4 pool balls. But here’s the kicker: it keeps thinking most balls are ‘4’ or ‘7’, even if they’re not (all annotations are correct, and all images have been annotated). Got any ideas on how to fix this?
I also want to live-stream this from a Google Cloud server for use in a Swift project, so I don’t know if that would change anything.
Thanks for the info. I’ve taken a look at your project, and it looks really interesting.
That said, I have some ideas about how to improve your project.
- In almost all cases, the best training data for a model is data that most closely resembles what you’ll run inference on.
- I see that you have a lot of training images, which is great. Your mAP, precision, and recall are also great, but you mostly have images of one ball on a white background. While good for a model detecting what type an individual pool ball is, it makes it more difficult for the model to detect them in any other context.
- In your case, if you want to detect pool balls on a pool table, you should add images with pool balls on a pool table.
- I think two specific tools will be very useful for your use case: the annotation heatmap and the histogram of object counts in Roboflow’s Health Check.
- The annotation heatmap can tell you where your annotations occur most frequently. Models learn from patterns (even if they aren’t relevant to your use case), so if location isn’t relevant to detecting the object, you should have your annotations spread out as much as possible.
- Your annotation heatmap at the moment:
- The histogram of object counts can tell you how many objects an image usually has.
- Your histogram at the moment:
- This is most likely the reason your model was only detecting one ball.
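To make the object-count point concrete, here is a small sketch of how such a histogram is built from COCO-style annotations (the field names are assumptions about a typical export format, not your exact dataset): count the objects per image, then count how many images have each object count.

```python
from collections import Counter

# Toy COCO-style annotation records: one entry per labeled object,
# each tagged with the image it belongs to (field name assumed).
annotations = [
    {"image_id": 1},                                      # image 1: one ball
    {"image_id": 2}, {"image_id": 2},                     # image 2: two balls
    {"image_id": 3}, {"image_id": 3}, {"image_id": 3},    # image 3: three balls
]

# Step 1: objects per image.
objects_per_image = Counter(a["image_id"] for a in annotations)

# Step 2: histogram of those counts (how many images contain N objects).
histogram = Counter(objects_per_image.values())
print(dict(histogram))  # {1: 1, 2: 1, 3: 1}
```

If nearly all the mass in your real histogram sits at one object per image, the model has rarely seen a multi-ball scene during training, which matches the single-detection behavior you described.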
Hope this helps. It really does sound like a fun project and we’d love for you to share it with us when you’re done.
Thank you for your response. I appreciate your help and guidance in addressing my question. I’ll be sure to keep you updated on the progress.