I have a dataset consisting of 9k images with human and ball classes, which I trained using YOLOv8. The model is currently an object detection model with bounding boxes, so when I run inference on an image I get results as boxes around the detections.
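For reference, this is roughly what my current inference step looks like (the weights path, image path, and class order are placeholders, not my exact setup):

```python
from ultralytics import YOLO

# Load the trained detection weights (path is a placeholder)
model = YOLO("runs/detect/train/weights/best.pt")

# Run inference on a single image
results = model("match_frame.jpg")

# Each result currently only exposes bounding boxes, not masks
for box in results[0].boxes:
    cls_id = int(box.cls[0])               # class index depends on my data.yaml
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # box corners in pixels
    print(model.names[cls_id], x1, y1, x2, y2)
```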
My project now requires me to run inference on images and produce segmentation results for the actual humans and the ball.
Rather than going through each image manually with the Smart Polygon tool, is there any AI technique I can use to automatically convert the dataset's bounding boxes into polygons, instead of having to do this by hand?
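To make the question concrete, this is the kind of automation I have in mind: prompting a segmentation model with my existing boxes. A rough sketch assuming something like Meta's Segment Anything (SAM); I haven't verified this is the right tool, and the checkpoint name below is a placeholder:

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Model type and checkpoint path are placeholders for whatever SAM variant is available
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB image
image = cv2.cvtColor(cv2.imread("match_frame.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt SAM with one of my existing YOLO boxes (xyxy, absolute pixels)
box = np.array([150, 80, 320, 400])  # example human box from my labels
masks, scores, _ = predictor.predict(box=box, multimask_output=False)

polygon_mask = masks[0]  # boolean HxW mask that could be traced into a polygon
```

If something like this is viable, I could loop it over the whole dataset and export the masks as polygon annotations instead of drawing them one by one.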
As an example, I need to be able to upload this kind of image:
and get results like this:
a transparent PNG image containing only the detections.
Currently I use OpenCV to remove the green pixels inside the bounding boxes, but this doesn't work well across multiple sports such as basketball, because the humans and the ball can appear on floors of different colours.
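For context, this is roughly what my current chroma-key step looks like (the HSV thresholds are approximate placeholders, not my exact values):

```python
import cv2
import numpy as np

def cut_out_detection(image_bgr, box_xyxy):
    """Crop a bounding box and drop green pixels, returning an RGBA cut-out."""
    x1, y1, x2, y2 = map(int, box_xyxy)
    crop = image_bgr[y1:y2, x1:x2]

    # Approximate green range in HSV - only works on grass-coloured pitches
    hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, np.array([35, 40, 40]), np.array([85, 255, 255]))

    # Alpha channel: opaque wherever the pixel is NOT green
    alpha = cv2.bitwise_not(green)
    rgba = cv2.cvtColor(crop, cv2.COLOR_BGR2BGRA)
    rgba[:, :, 3] = alpha
    return rgba

# Example usage with a detected box:
# cv2.imwrite("player.png", cut_out_detection(frame, (150, 80, 320, 400)))
```

As soon as the court is not green (e.g. a basketball floor), this thresholding approach falls apart, which is why I'm looking for a segmentation-based solution.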