Car Number Plate and Text Detection

I have been working on a custom project involving car number plate detection and detection of the text within the number plates. My number plates look like this:

Each plate consists of a city name, a letter code, and some digits (0–9). There are 50 cities in total, so I have created 50 city classes (e.g., CityA, CityB, CityC, …). There are 20 different letter-code combinations, so I have created 20 letter-code classes (e.g., XY, YZ, XZ, …). Each digit position is one of 10 possible digits, so I have created 10 digit classes, 0 through 9. In total, I have created 81 classes.

Class 0: Number Plate
Class 1: CityA
Class 2: CityB
Class 3: CityC
…
Class 51: XY
Class 52: YZ
Class 53: XZ
…
Class 71: 0
…
Class 80: 9
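For clarity, the 81-class ID layout above can be written out programmatically. This is only a sketch: the city and letter-code names here are placeholders, since the real names are not given in the post.

```python
# Build the 81-class ID map described above: ID 0 is the plate,
# IDs 1-50 the cities, IDs 51-70 the letter codes, IDs 71-80 the digits.
# City/code names are placeholders (the real ones aren't in the post).
cities = [f"City{i}" for i in range(1, 51)]   # placeholder city names
codes = [f"Code{i}" for i in range(1, 21)]    # placeholder letter codes
digits = [str(d) for d in range(10)]          # "0" .. "9"

names = ["Number Plate"] + cities + codes + digits
id_map = {i: name for i, name in enumerate(names)}

assert len(id_map) == 81
assert id_map[71] == "0" and id_map[80] == "9"
```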

My dataset consists of 2000 pictures in total, each annotated with the number plate class. On average, I have about 40 annotations for each of the 50 city classes, about 100 for each of the 20 letter-code classes, and about 200 for each of the 10 digit classes. After annotating the whole dataset this way, I trained my model with YOLOv8. However, the confusion matrix shows that it only detects the number plate class; it does not detect any of the other classes within the number plates. How can I solve this so that my model detects both the number plate class and the other (text) classes within the plates?

Note: Please don’t suggest using OCR, as I want to detect the text within the number plates as objects.

Hi @Samir

Welcome to the Roboflow forum.

Could you share the project’s Universe page (if public)? It might be worth taking a look at your project’s health check, specifically your class balance.

A model needs a sufficient number and balance of example images/annotations for each class for that class to be detected well.

Could you explain why you are against using OCR? Most license plate detection use cases, like some we’ve covered on the Roboflow blog, use OCR because object detection models need a sufficient number and balance of examples per class. If you are open to using OCR, you would likely only need to label a smaller number of example images with a handful of classes, for example: number plate, city name, letter code, and the digits (as a whole). You could then run the detected regions through OCR.
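For reference, the detect-then-OCR pipeline described above boils down to: detect the plate, crop it, then read the crop. Below is a rough sketch; the crop helper is generic, while the commented Ultralytics YOLO / pytesseract usage (including the `plate_only.pt` weights path) is an assumption for illustration, not something from this thread:

```python
def crop_box(img, box):
    """Crop an axis-aligned box (x1, y1, x2, y2) from an image stored as
    rows of pixels (works on nested lists and on numpy-style arrays)."""
    x1, y1, x2, y2 = (int(v) for v in box)
    return [row[x1:x2] for row in img[y1:y2]]

# Sketch of use with Ultralytics YOLO and pytesseract (both real
# libraries, but the weights path and wiring here are hypothetical):
#
#   from ultralytics import YOLO
#   import cv2, numpy as np, pytesseract
#
#   model = YOLO("plate_only.pt")        # a single "Number Plate" class
#   img = cv2.imread("car.jpg")
#   for box in model(img)[0].boxes.xyxy.tolist():
#       plate = crop_box(img, box)
#       text = pytesseract.image_to_string(np.asarray(plate))
```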

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.