Hi all!
I have several 3D objects, and I want to find out which direction each of them is facing.
In addition, a hand appears in the images, and the aim is to detect it as another object.
Is it smartest to label the objects for each hand position and then treat it as an object detection task, or are there simpler, less complicated methods?
Thank you very much!
Hi @pagan-parlor-tag - it’s a bit hard to follow the question. Do you mind sharing some example images (or even a drawing)?
I could see keypoints being useful - use a two-point skeleton: one point for front and one for back. Would that work?
Hey!
Thanks for your response!
I am sorry that my description was unclear.
My pictures look like this:
(The finger points in the direction the object is facing.)
Have a nice day!
Keypoints are going to do this easily for you. You’ll want a different class for each object, and a front and a back skeleton point to label.
Hi!
Thanks for your reply! And great approach.
I created new classes for each object.
Also: I created another class for the hand, correct?
These are some examples of my labeling: EXAMPLE LABLES - Album on Imgur
Is this good? (Especially the armchair object, whose direction points upward; it is the last object.)
Thanks a lot!
That works - just make sure that there are two classes for the skeleton points, front and back.
You’ll then need to do some angle math, but it shouldn’t be too bad.
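For reference, here’s a minimal sketch of that angle math, assuming you have the (x, y) pixel coordinates of the two skeleton points (the names and values here are just illustrative):

```python
import math

def facing_angle(front_xy, back_xy):
    """Direction the object faces, in degrees: 0 = right, 90 = up.

    Measured from the back keypoint toward the front keypoint.
    Image y grows downward, hence the negated dy.
    """
    dx = front_xy[0] - back_xy[0]
    dy = front_xy[1] - back_xy[1]
    return math.degrees(math.atan2(-dy, dx)) % 360

# Example: front directly above back -> the object faces up (90.0)
print(facing_angle((100, 50), (100, 150)))
```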
Okay, great!
Do you mean two skeleton points for the classes?
Like this, right?
Yup! You should be good to go
Perfect!
Thank you very much for your assistance!
Hi, I ran into an issue after labeling:
When using YOLO pose estimation, the names of the skeleton points are not returned.
It also seems I created the skeleton points in the wrong order for one of my 13 classes, and that one is now “upside down”.
I hope it is understandable what I mean: the front and back are reversed in the results.
How can I fix this? Do I have to recreate the class and completely relabel it? Thank you very much for your help. I can also upload pictures again if required!
Photos would be useful - did you do the training in Roboflow or outside the platform?
Okay, so:
I’m using YOLOv8-pose externally, on my PC with a GPU.
The keypoints are then output by model.predict()[0].tojson(), but they come back WITHOUT names, like this:
"keypoints": {
  "x": [
    755.3491821289062,
    645.9105834960938
  ],
  "y": [
    113.75042724609375,
    106.71204376220703
  ],
So I can’t tell the two keypoints apart. Fortunately, for all but one class, the order is always front first and then back.
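In case it helps, this is roughly how I read them, relying purely on the index order (a sketch; the weights and image paths are placeholders):

```python
from ultralytics import YOLO

FRONT, BACK = 0, 1  # assumed index order = skeleton-point creation order

model = YOLO("best.pt")                 # placeholder: my trained weights
result = model.predict("image.jpg")[0]  # placeholder: a test image

for cls_id, kpts in zip(result.boxes.cls, result.keypoints.xy):
    # kpts has shape (num_keypoints, 2); the index selects the skeleton point
    front = kpts[FRONT].tolist()
    back = kpts[BACK].tolist()
    print(int(cls_id), "front:", front, "back:", back)
```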
They are also displayed graphically; here are the examples:
(Yellow: front; red: back)
For exactly this one, the color / direction is reversed:
I hope my problem has become clearer now.
Do you know how I can fix this?
My assumption is that in Roboflow the creation order of the skeleton points matters: the keypoint created first is listed first in the output, and for exactly one class I created them the wrong way round.
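If my assumption is correct, could I work around it by swapping the two keypoints for just that class in post-processing, something like this? (The class ID is a placeholder.)

```python
REVERSED_CLASS_ID = 7  # placeholder: the one class I created in the wrong order

def ordered_keypoints(cls_id, kpts):
    """Return (front, back), swapping the pair for the reversed class."""
    if int(cls_id) == REVERSED_CLASS_ID:
        return kpts[1], kpts[0]  # this class was created back-first
    return kpts[0], kpts[1]
```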
Hi @pagan-parlor-tag, we can’t provide support for models trained and deployed outside of the Roboflow environment. I recommend:
- Training in Roboflow and deploying through the hosted API or self-hosting with inference.
- Uploading the model to Roboflow (docs) and using our deployment methods from there.