YOLOv8 Model Detecting Background Objects as Hand Gestures

Hello Roboflow community,
My name is Philip and I’m working on a hand gesture recognition model for Sign Language using YOLO. However, I’m facing an issue where the trained model sometimes detects my face or random background objects as a hand gesture. I annotated my hand gesture dataset on Roboflow (only 2 classes, A and B, because I wanted to see how it works first) and trained the model in Google Colab, using 100 images for each class.

Here is what I’ve done so far:
- Tried adjusting the confidence and IoU thresholds, but the issue persists.
- Increased training epochs to 100.
- Used bounding boxes and hand landmarks for annotation.
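For anyone following along: in Ultralytics YOLOv8 the thresholds are passed as the `conf` and `iou` arguments to `model.predict(...)`. Below is a library-independent sketch (the detection tuples and function name are illustrative, not from the model) of what raising the confidence cutoff does to raw detections:

```python
# Minimal sketch of confidence-threshold filtering, independent of YOLOv8.
# Each detection is a (class_name, confidence) pair; raising the cutoff
# drops low-confidence false positives such as faces or background objects.

def filter_detections(detections, conf_threshold=0.5):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d[1] >= conf_threshold]

# Illustrative raw output: the model is confident about a real "A" gesture
# but only weakly "sees" gestures in the background.
raw = [("A", 0.91), ("B", 0.34), ("A", 0.27)]

print(filter_detections(raw, conf_threshold=0.5))  # keeps only the 0.91 detection
```

Note that thresholding can only hide false positives the model is already unsure about; if it is *confidently* wrong about your face, the fix has to come from the training data, not the threshold.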

Project Type: Object Detection

Model Used: YOLOv8
Operating System & Browser: Windows 10 and Chrome
Project Universe Link or Workspace/Project ID: Serious Object Detection Dataset by Example

Below is an example of what I meant.

I’d really appreciate any advice on resolving this issue. Thanks in advance!

Hey Ufedo! I went to browse your images, but they were no longer available, so I can only guess here. One option is to make sure about 10% of your images are “null” images: no hand at all, only your face and the background. You can’t just leave them unannotated, either; you have to actually mark them as null (Dataset > check the box for each null image > Actions > Mark Null). This will help the model learn not to detect your face and other background items. Best of luck!
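To add to this: in the standard YOLO export format, a null image is simply an image whose label file exists but is empty. If you are preparing a dataset outside Roboflow, a sketch like the one below (directory names and the helper are illustrative assumptions, not part of any library) can create the empty label files for your background-only images:

```python
from pathlib import Path

def add_null_labels(images_dir, labels_dir):
    """Create an empty YOLO label file for every image that lacks one,
    so those images count as explicit negatives (zero annotations)."""
    images_dir, labels_dir = Path(images_dir), Path(labels_dir)
    labels_dir.mkdir(parents=True, exist_ok=True)
    created = []
    for img in sorted(images_dir.iterdir()):
        if img.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue  # skip non-image files
        label = labels_dir / (img.stem + ".txt")
        if not label.exists():
            label.touch()  # empty file = image with no objects
            created.append(label.name)
    return created
```

In Roboflow itself the Mark Null action handles this for you; the script is only for hand-rolled YOLO datasets.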

Ok, thank you sir. I will try that.