Playing Card Accuracy

I am experimenting with some of the existing playing card models, and I have to admit I'm a bit shocked at the lack of accuracy, to the point that I feel like I must be doing something wrong. Some cards do okay, but Aces sometimes come back as 4s, and other cards aren't much more reliable. There is no way I can use this in the real world if the accuracy is that bad, even with over 20k images in the dataset.

  • Project Type: Card Recognition
  • Operating System & Browser: iOS
  • Project Universe Link or Workspace/Project ID: poker-x1s5t/1, playing-cards-n7nhw/1

I looked at your poker-x1s5t example. It's a massive dataset, but I was only six images in when I found an annotation error. You can see below that a Jack of Hearts (which should be JH) is labeled as '42'. There are plenty more like it, and that's going to completely confuse any model. So I'd say the first step is either to fork that dataset and clean it up, or find a better dataset.
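Before training on a fork, it may be worth running a quick sanity check over the class labels to catch stray ones like '42' in bulk rather than by eye. This is a rough sketch, assuming the dataset is downloaded as a COCO-style JSON export (the `categories` layout and the two-character card codes like `JH` are assumptions about how the set is labeled):

```python
import json

# The 52 expected card labels: rank + suit, e.g. "JH" for Jack of Hearts.
# (Assumes the dataset uses this naming scheme; adjust to match yours.)
VALID_LABELS = {
    rank + suit
    for rank in ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
    for suit in ["H", "D", "S", "C"]
}

def find_bad_labels(coco):
    """Return any category names that aren't one of the 52 expected cards."""
    names = {cat["name"] for cat in coco.get("categories", [])}
    return sorted(names - VALID_LABELS)

if __name__ == "__main__":
    # Hypothetical path; point this at your exported annotations file.
    with open("_annotations.coco.json") as f:
        print(find_bad_labels(json.load(f)))
```

Anything the script prints is a label that can't be a real card and needs fixing or dropping before training.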

And also:

Oh wow. That’s very interesting.

Any idea why I am able to fork some repos, but not others? For example, I would have liked to use this dataset, which is one of the few that actually has a decent number of star ratings on it:

But this set doesn't even load in my iOS app, apparently because the iOS SDK only supports "Roboflow v3" models, and I can't tell what kind of model that set uses. I also see a "fork" option on some models but not others.

Not sure if I should just try "Download" on the above project and then use it in my own project?

Actually, I found the "fork" option on the image list; I was looking in the wrong place. I'm guessing that doesn't include the annotations, though.


Usually the annotations come along with the fork; they have for me in the past. If they don't always, I'm not sure what determines it.