Trained model performs worse even though stats are better?

I have a pre-annotated custom dataset that I’ve uploaded to Roboflow.
On the free tier, I’ve trained it with the Fast method twice: once with just the un-augmented annotated images, and once with augmentations like Hue, Saturation, Brightness, Bounding Box, and a few others.
The training results are:
Unaugmented training on 113 images:
mAP: 28.1%
Precision: 39.7%
Recall: 36.3%

Augmented version, with the 113 images boosted to 491 images:
mAP: 67.0%
Precision: 45.4%
Recall: 93.8%

Testing the un-augmented version works pretty well for such low stats. It IDs the objects (different types of cactus) in random images from the dataset quite well, although it’s kind of sketchy on images outside the training set. Shrug, okay. I’m using the Fast method on the free tier, so what do I expect, right?

But the version that is supposed to have trained on more varied filters and conditions, and that reports better post-training stats, is failing completely. Testing it on any image, inside or outside the training set, yields “No Detections”. I’ve tried adding/deleting augmentations and retraining, with the same results.

I have a good bit of experience with basic CV and machine learning, but I’m pretty new to annotated object-detection systems.
I’m trying to understand what’s going on here in Roboflow. I’d think that if this were a case of overfitting, at least some images in the training set would give stellar results, but it doesn’t seem like ANYTHING is working on the supposedly “better” trained version.
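
In case it helps, this is roughly what I mean by “testing”: running images through the hosted model and looking at the raw predictions, e.g. with the roboflow pip package. The names below are placeholders, and the very low confidence value is just to rule out the possibility that detections exist but sit below the default threshold.

```python
from roboflow import Roboflow

# Placeholder names: swap in my real API key, project slug, and version number.
rf = Roboflow(api_key="MY_API_KEY")
model = rf.workspace().project("my-cactus-project").version(2).model

# confidence is a percentage; 1 asks for essentially everything the model produces,
# so "no detections at all" can be told apart from "detections below the threshold".
result = model.predict("test_cactus.jpg", confidence=1, overlap=30).json()

print(len(result["predictions"]), "detections")
for p in result["predictions"]:
    print(p["class"], round(p["confidence"], 2), p["x"], p["y"], p["width"], p["height"])
```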

Hi,

What preprocessing and augmentation features did you use?

Additionally, are you sampling false detections or non-detections and adding those images back to your dataset to relabel and retrain? See: Implementing Active Learning - Roboflow
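
If it helps, here’s a rough sketch of what that loop can look like with the roboflow pip package. The workspace/project names, version number, and confidence threshold below are placeholders to adapt to your project.

```python
import os
from roboflow import Roboflow

# All names here are placeholders -- use your own API key,
# workspace/project slugs, and trained version number.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-cactus-project")
model = project.version(2).model

UNLABELED_DIR = "new_field_photos"   # images the model was not trained on
CONF = 40                            # hosted-inference confidence threshold, in percent

for fname in os.listdir(UNLABELED_DIR):
    if not fname.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    path = os.path.join(UNLABELED_DIR, fname)
    preds = model.predict(path, confidence=CONF, overlap=30).json().get("predictions", [])

    # No detections -> send the image back to the project to be labeled
    # and included in the next trained version.
    if len(preds) == 0:
        project.upload(path)
        print(f"queued for relabeling: {fname}")
```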

Both versions use Auto-Orient, Isolate Objects, and Resize.
The first version has no augmentations.
The second one uses Hue, Saturation, Brightness, and Exposure.
I’ll go look at the link for Active Learning.
Thanks!

You will want to remove Isolate Objects as a preprocessing step.

That feature is intended for use when converting object detection projects to classification projects.
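
To illustrate, Isolate Objects effectively turns each labeled bounding box into its own cropped image, roughly like the sketch below (the file name and box coordinates are made up). Crops like these are what a classification model wants, but a detector trained on them doesn’t learn to locate objects in a full scene.

```python
from PIL import Image

# Rough illustration only: "Isolate Objects" crops each labeled bounding box
# out into its own image. The file name and (xmin, ymin, xmax, ymax) boxes
# below are made-up placeholders.
boxes = [(34, 50, 180, 220), (300, 90, 420, 260)]

img = Image.open("cactus_photo.jpg")
for i, box in enumerate(boxes):
    crop = img.crop(box)                       # one object per output image
    crop.save(f"cactus_photo_object_{i}.png")  # no background or scene context left
```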

When you retrain your model from there, let me know what you get for training results and test results.

Removal of Isolate Objects should help a lot, and Active Learning will too.

And you’re welcome!

Cool!
I’ll try that and let you know what happens.
Thanks!

Sounds good! And you’re welcome!