I have several 3D objects, and I want to find out which direction each of them is facing.
In addition, the aim is to select the same object with a hand, in another image.
Is it smartest to label the objects for each hand position and then treat this as object detection, or are there simpler, less complicated variants/methods?
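One simple variant: give each object two labeled keypoints, a front and a back, and compute the heading directly from them. A minimal sketch, assuming (x, y) pixel coordinates (names and values here are illustrative, not from any specific model output):

```python
import math

def heading_deg(front, back):
    """Angle the object is facing, in degrees: 0 = +x axis, 90 = +y axis.

    Note: in image coordinates y usually grows downward, so 90 degrees
    points toward the bottom of the image.
    """
    dx = front[0] - back[0]
    dy = front[1] - back[1]
    return math.degrees(math.atan2(dy, dx)) % 360

# Object pointing straight along +x:
print(heading_deg((10.0, 5.0), (2.0, 5.0)))  # 0.0
```

With this, "which direction is it facing" reduces to one `atan2` per detection, and no extra per-direction classes are needed.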
Hi!
Thanks for your reply! And great approach.
I created new classes for each object.
Also, I created another class for the hand; is that correct?
These are some examples of my labeling: EXAMPLE LABLES - Album on Imgur
Is this good? (Especially the armchair object, the last one, whose direction is toward the top.)
Thanks a lot!
Hi, I ran into an issue after labeling:
When using YOLO pose estimation, the names of the skeleton points are not returned.
It also seems I created the skeleton points in the wrong order for one of my 13 classes, so that class is now “upside down”.
I hope that makes sense: the front and back are reversed in the results.
How can I fix this? Do I have to create the class again and completely relabel it? Thank you very much for your help. I can also upload pictures again if required!
I’m using YOLOv8-pose, on my PC with a GPU.
The keypoints are then output by model.predict()[0].to_json(), but WITHOUT names, like this:
So I can’t tell which of the two keypoints is which. Fortunately, for all but one class, the start point always comes first and the end point second.
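Since only the index order is reliable, one way to recover the names is a small per-class lookup that zips names onto the keypoints in creation order. A sketch with hypothetical class and keypoint names ("front"/"back"):

```python
# Hypothetical keypoint names per class, in the order they were
# created in the annotation tool (index order in the JSON output).
KEYPOINT_NAMES = {
    "armchair": ["front", "back"],
    "car": ["front", "back"],
}

def name_keypoints(class_name, keypoints):
    """Pair each (x, y) keypoint with its name, relying on index order."""
    names = KEYPOINT_NAMES[class_name]
    return dict(zip(names, keypoints))

print(name_keypoints("car", [(120.0, 80.0), (90.0, 80.0)]))
# {'front': (120.0, 80.0), 'back': (90.0, 80.0)}
```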
They are also displayed graphically; here are the examples:
(Yellow: front; red: back)
I hope it has now become clearer what my problem is.
Do you know how I can fix this?
It seems that in Roboflow the order in which the keypoints were created matters: the keypoint created first is listed first in the output. For exactly one class I created them the wrong way round. That is my assumption.
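If that assumption holds, full relabeling may not be strictly necessary: the reversed class can be corrected after prediction by swapping its two keypoints. A minimal sketch, with a hypothetical set of affected class names:

```python
# Classes whose two keypoints were created in the wrong (back-first) order.
# "armchair" here is an assumed example, not confirmed from the dataset.
REVERSED_CLASSES = {"armchair"}

def fix_keypoint_order(class_name, keypoints):
    """Swap the two keypoints for classes labeled in the wrong order."""
    if class_name in REVERSED_CLASSES and len(keypoints) == 2:
        return [keypoints[1], keypoints[0]]
    return keypoints

print(fix_keypoint_order("armchair", [(1.0, 2.0), (3.0, 4.0)]))
# [(3.0, 4.0), (1.0, 2.0)]
```

Relabeling in the dataset is still the cleaner long-term fix, but this keeps existing predictions usable in the meantime.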
Okay, got it. I managed to fix it luckily!
One further question: two of my objects, an armchair and a car, are often confused by the model, and the training metrics show this as well.
How can I prevent that in general?
Do I need to add more training pictures of these 2 objects? Should I increase or decrease the epochs?
I would filter your data by class to see if you have any mis-labeled images. Once you’ve double-checked everything, I would add some more images of those two classes.
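To filter by class, a small script can scan the YOLO-format label files and list which images contain a given class id, so you can eyeball just those. A sketch assuming the standard `<class_id> x y w h ...` line layout in each `.txt` label file:

```python
from pathlib import Path

def images_with_class(labels_dir, class_id):
    """Return YOLO label files that contain at least one box of class_id."""
    hits = []
    for txt in sorted(Path(labels_dir).glob("*.txt")):
        for line in txt.read_text().splitlines():
            fields = line.split()
            if fields and int(fields[0]) == class_id:
                hits.append(txt.name)
                break  # one match per file is enough
    return hits

# Example: images_with_class("dataset/train/labels", 3)
```

Reviewing only the files this returns for the armchair and car class ids should quickly surface any swapped or mis-labeled examples.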