YoloV8 annotation format for object detection

Hello everyone,

I’m trying to learn the YoloV8 annotation syntax in order to build a tool for an object detection model, and here is what I got:

The format is supposed to be `classId centerX centerY width height` (space-separated, with coordinates normalized to the image size),

but the thing is, when I export a dataset from Roboflow in TXT format for YoloV8, I get far more values than expected:

I tried to identify one of the values by scaling it, and it appears that the first one corresponds to a coordinate, but not the centerX. It looks more like a point used to draw a border.

I discussed this a bit with Roboflow AI before posting, and it said that YoloV8 uses such a format for segmentation models, and that the format I was expecting was actually YoloV5’s.
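For what it’s worth, a quick way to sanity-check which format a label file uses is to count the fields per line: a detection line has exactly 5, while a segmentation (polygon) line has 1 + 2×N for N polygon points. This is a small sketch assuming space-separated, normalized YOLO-style labels:

```python
def is_segmentation_label(line: str) -> bool:
    """Heuristic: a YOLO detection line is `class cx cy w h` (5 fields);
    a segmentation line is `class x1 y1 x2 y2 ...` (7 or more fields)."""
    return len(line.split()) > 5

# A 5-field line parses as a detection box; a longer line is a polygon.
print(is_segmentation_label("0 0.5 0.5 0.2 0.3"))              # detection
print(is_segmentation_label("0 0.1 0.1 0.5 0.1 0.5 0.3 0.1 0.3"))  # polygon
```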

I still want to use YoloV8, so I would like to know whether any of you have already parsed such values, and if not, at least what they are supposed to represent and the position of each field.

Thanks in advance, best regards.

Are you exporting from an object detection project or an instance segmentation project?

So I just checked the project type, and it says “Object Detection”. Also, the only choice I have is the format in which to export my dataset, such as YoloV8, YoloV5 PyTorch, etc., so I assume it’s Object Detection only, which should contain all this information.

It appears, though, that some of my polygons contain more than 4 points. Is that a problem?

When you click Export, you will see an option to pick the format type. Scroll down to pick YoloV8.

Well, the issue is that the data I posted in my first post comes from a YoloV8 object detection export.

As I mentioned in one of my replies, some of my shapes have more than 4 points. Is that a problem, even though the project is Object Detection and the format is YoloV8?

Do you mind adding Roboflow support (via the Invite page) to your workspace so I can take a closer look?

There is a checked box saying “Grant access” alongside “Roboflow Support”, so I think you can already access the repo.

servers and rack detection - v11 2023-10-23 3:38pm (roboflow.com)

Ah - this is because you are using polygon annotations. Our native training will handle the distinction and train an object detection model from the dataset; otherwise, you’ll want to do the conversion on your own if you want to train outside of our platform. Here’s a good resource: Ultimate Guide to Converting Bounding Boxes, Masks and Polygons
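If you do need to convert yourself, the usual approach is to take the min/max of the polygon’s x and y coordinates and rewrite the line as a center/width/height box. This is a minimal sketch, assuming space-separated labels with coordinates already normalized to [0, 1]:

```python
def polygon_to_bbox(line: str) -> str:
    """Convert a YOLO segmentation line `class x1 y1 x2 y2 ...`
    into a YOLO detection line `class cx cy w h` (all normalized)."""
    parts = line.split()
    class_id = parts[0]
    coords = [float(v) for v in parts[1:]]
    xs, ys = coords[0::2], coords[1::2]  # alternating x, y values
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    cx = (x_min + x_max) / 2
    cy = (y_min + y_max) / 2
    w = x_max - x_min
    h = y_max - y_min
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# A 4-point polygon collapses to its axis-aligned bounding box:
print(polygon_to_bbox("0 0.1 0.1 0.5 0.1 0.5 0.3 0.1 0.3"))
```

The min/max trick works for polygons with any number of points, which is why the “more than 4 points” shapes in this thread are not a problem for the conversion.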

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.