- Project type: Object detection.
- Windows 10, Chrome browser.
My goal is to do object detection and I have a dataset of 1000 photos. To avoid drawing bounding boxes manually for all of them, I annotated 100 and followed this tutorial:
Everything seemed to run correctly, except that the label files that were created contain very little information, and as a result, when I drag the images and labels into Roboflow, no bounding boxes appear. The final JSON files look like this, for example:
They contain only the photo resolution. The Roboflow project is this:
plum quality classification Object Detection Dataset and Pre-Trained Model by myworkspace. I have applied some pre-processing (resize, flip) and augmentation (×3).
How come there is nothing else? It doesn’t work with the Label Assist tool either.
thank you all
Sorry to hear you’re having issues.
Is this an annotation file or an inference result? It looks like a result from Roboflow’s hosted inference API, which the Annotate API uses to annotate new images with a previously trained model.
Do you have a trained model ready?
Also, if you wouldn’t mind, could you share the code snippet you’re using that produced that result? (Remember to remove your private Roboflow API key if applicable)
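For context, here is a minimal sketch of querying the hosted inference endpoint directly and inspecting what comes back. The model id, version, and confidence value below are placeholders, and this is an illustration of the documented detect.roboflow.com endpoint, not the tutorial’s exact script:

```python
import base64
import json
import urllib.request

def build_url(model_id, version, api_key, confidence=40):
    """Build the hosted inference endpoint URL.
    model_id and version are placeholders, not a real project."""
    return (f"https://detect.roboflow.com/{model_id}/{version}"
            f"?api_key={api_key}&confidence={confidence}")

def infer(image_path, model_id, version, api_key, confidence=40):
    """POST a base64-encoded image and return the parsed JSON response."""
    with open(image_path, "rb") as f:
        payload = base64.b64encode(f.read())
    req = urllib.request.Request(
        build_url(model_id, version, api_key, confidence),
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

If the response’s `predictions` list is empty at your chosen confidence, the annotation step has nothing to write besides the image size, which would match the JSON you’re seeing.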
Hi @leo, and thanks for asking. Yes, I followed Peter’s tutorial from Roboflow, which is at the “Annotate API” link I posted. I also trained the model on Roboflow, but when I drag the images and labels together into Roboflow’s “Upload” section, I only see the photos, without any annotations. I also tried Label Assist and selected my project and model, but it doesn’t automatically generate the bounding boxes like in Peter’s video. The code I used is this:
git clone git@github.com:roboflow/auto-annotate.git
cd auto-annotate
python3 -m venv venv
source venv/bin/activate
pip install -e .
export ROBOFLOW_API_KEY=…
python -m a2.annotate
It sounds like your model might not be making predictions confidently quite yet.
Have you tried messing around with the confidence on Label Assist? Did that help at all?
Can you try inferring on your images normally through Roboflow? (Either through the Deploy tab or the Deploy Your Model section under each trained version) Do any of your expected predictions show up there?
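If it helps to debug, you can also filter the raw response locally to see at what threshold any boxes survive. A small sketch, assuming the usual shape of a hosted-inference detection response (the sample values here are made up):

```python
import json

# Made-up sample mimicking a hosted-inference detection response
# (fields assumed: x, y, width, height, class, confidence).
sample = json.loads("""
{
  "predictions": [
    {"x": 120.0, "y": 80.0, "width": 60.0, "height": 40.0,
     "class": "bad", "confidence": 0.03},
    {"x": 200.0, "y": 150.0, "width": 55.0, "height": 42.0,
     "class": "bad", "confidence": 0.62}
  ],
  "image": {"width": 416, "height": 416}
}
""")

def above_threshold(response, confidence=0.4):
    """Keep only predictions at or above the given confidence."""
    return [p for p in response["predictions"] if p["confidence"] >= confidence]

print(len(above_threshold(sample, 0.4)))   # only the 0.62 box survives at 40%
print(len(above_threshold(sample, 0.01)))  # both boxes survive at 1%
```

If nothing survives even at a very low threshold, the model simply isn’t detecting anything yet.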
Hi @leo. I have tried playing with the confidence, but it doesn’t work; it doesn’t detect anything. I have also tried inferring on my images, but I don’t get any predictions. In my case, all the plums are bad, and I have only one class.
I think, in that case, you don’t have enough data. I tried your model on Universe with a sample from your test dataset, and predictions only showed up once I lowered the confidence to 1%. This is why your Label Assist and inference weren’t working.
Also, looking at your dataset, you might be better off with an image classification model. It seems that your images always contain a single plum, and you have one label you want to assign to each image. It might be worth looking through this blog post to see whether classification is the way to go for you.
Otherwise, I think you need more images to train with. One easy way to create more training data is to add more augmentations. I saw you have horizontal and vertical flips, which is a good start, but adding more augmentation types to increase the variety of your dataset will likely help your model perform better and avoid overfitting.
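Roboflow handles this for you, but it’s worth noting why augmentation for detection is more involved than for classification: every geometric transform applied to the image must also be applied to its bounding boxes. A minimal sketch for a horizontal flip, assuming center-format (x, y, w, h) coordinates:

```python
def hflip_box(box, img_width):
    """Mirror a bounding box (center-x, center-y, width, height)
    across the image's vertical axis, matching a horizontal flip."""
    cx, cy, w, h = box
    return (img_width - cx, cy, w, h)

# A box centered at x=100 in a 416-px-wide image moves to x=316.
print(hflip_box((100, 50, 40, 30), 416))
```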
Hope this helps.
So, the main issue is that there aren’t enough photos, and therefore not enough annotated bounding boxes, for Roboflow’s inference API to automatically annotate new images. I know image classification would be more appropriate, but I am taking this approach because, for a university project, I need to compare how object detection with YOLO performs against image classification with other neural networks.
OK, thank you, and consider the issue resolved.