License Plate Recognition (LPR) plays a critical role in various applications, such as toll collection, parking management, and traffic law enforcement.
As a result, the generated images have a Structural Similarity Index Measure (SSIM) of less than 0.10.
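For reference, the statistic behind that threshold can be sketched in a few lines. The sketch below computes a simplified single-window SSIM in NumPy (the standard measure averages this statistic over local windows; the gradient test image and constants are illustrative, not from the paper):

```python
import numpy as np

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified single-window SSIM for images scaled to [0, 1].

    Full SSIM averages this statistic over local windows; one global
    window is enough to illustrate the measure.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # simple gradient image
print(round(global_ssim(img, img), 4))             # identical images -> 1.0
```

Identical images score 1.0; structurally unrelated content (e.g. the gradient versus its mirror image) scores well below the 0.10 mark.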
This work draws attention to the large fraction of near-duplicates in the training and test sets of datasets widely adopted in License Plate Recognition (LPR) research.
Our findings show that GAN-based techniques and spatial-level transformations are the most promising for improving the learning of deep models on this problem, with StarGANv2 + F applied with a probability of 0.3 achieving the highest F-score on the Ricord1a dataset under the unified training strategy.
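Applying an augmentation "with a probability of 0.3" typically means each training sample is independently transformed with that chance. A minimal sketch of that pattern (the function name and the identity placeholder transform are hypothetical, not from the paper):

```python
import random

def maybe_apply(image, transform, p=0.3, rng=random):
    # Apply `transform` with probability p (e.g. the 0.3 used for
    # StarGANv2 + F); otherwise return the image unchanged.
    return transform(image) if rng.random() < p else image

# Sanity check: over many samples, roughly p of them are transformed.
random.seed(42)
hits = sum(maybe_apply(0, lambda x: 1, p=0.3) for _ in range(10_000))
print(hits / 10_000)  # close to 0.3
```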
The License Plate Recognition (LPR) field has made impressive advances in the last decade due to novel deep learning approaches combined with the increased availability of training data.
This work introduces a new ZSAR method based on the relationships between actions and objects and between actions and descriptive sentences.
To the best of our knowledge, this is the first time SDEs have been used for such an application.
We performed experiments on eight datasets, four collected in Brazil and four in mainland China, and observed that each dataset has a unique, identifiable "signature" since a lightweight classification model predicts the source dataset of a license plate (LP) image with more than 95% accuracy.
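The "signature" finding means even simple global statistics can separate datasets whose capture pipelines differ. The sketch below is illustrative only (the paper uses a lightweight classification model on LP images; here, synthetic data, histogram features, and a nearest-centroid rule stand in for it):

```python
import numpy as np

def signature(img, bins=16):
    # Global intensity histogram: a crude per-dataset "signature" feature.
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

rng = np.random.default_rng(0)
# Two synthetic "datasets" whose capture pipelines skew brightness differently.
data_a = [rng.uniform(0.0, 0.6, (32, 32)) for _ in range(50)]
data_b = [rng.uniform(0.4, 1.0, (32, 32)) for _ in range(50)]

cent_a = np.mean([signature(x) for x in data_a[:40]], axis=0)
cent_b = np.mean([signature(x) for x in data_b[:40]], axis=0)

def predict(img):
    s = signature(img)
    return "A" if np.linalg.norm(s - cent_a) < np.linalg.norm(s - cent_b) else "B"

held_out = [(x, "A") for x in data_a[40:]] + [(x, "B") for x in data_b[40:]]
acc = np.mean([predict(x) == y for x, y in held_out])
print(acc)  # high accuracy on clearly separable synthetic data
```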
The replacement of analog meters with smart meters is costly, laborious, and far from complete in developing countries.
Automatic License Plate Recognition (ALPR) systems have shown remarkable performance on license plates (LPs) from multiple regions due to advances in deep learning and the increasing availability of datasets.
To the best of our knowledge, this is the first work to represent both videos and labels with descriptive sentences.
We introduce a method to learn unsupervised semantic visual information based on the premise that complex events (e.g., minutes) can be decomposed into simpler events (e.g., a few seconds), and that these simple events are shared across several complex events.
Thus, the use of datasets containing many subjects is essential to assess biometric systems' capacity to extract discriminating information from the periocular region.
Ranked #1 on Image Classification on Imbalanced CUB-200-2011
In this work, we present a robust and efficient solution for counting and identifying train wagons using computer vision and deep learning.
Existing approaches for image-based Automatic Meter Reading (AMR) have been evaluated on images captured in well-controlled scenarios.
Ranked #1 on Meter Reading on UFPR-AMR
Smart meters enable remote and automatic electricity, water and gas consumption reading and are being widely deployed in developed countries.
Ranked #1 on Meter Reading on UFPR-ADMR-v1
The use of the iris and periocular region as biometric traits has been extensively investigated, mainly due to the singularity of the iris features and the use of the periocular region when the image resolution is not sufficient to extract iris information.
To explore our dataset, we design a two-stream CNN that simultaneously uses two of the most distinctive and persistent features available: the vehicle's appearance and its license plate.
This paper presents an efficient and layout-independent Automatic License Plate Recognition (ALPR) system based on the state-of-the-art YOLO object detector. It combines license plate (LP) detection and layout classification in a unified approach, using post-processing rules to improve the recognition results.
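A typical layout-driven post-processing rule swaps characters the OCR stage commonly confuses, based on whether each position in the predicted layout expects a letter or a digit. The sketch below is a plausible heuristic of that kind, not the paper's exact rule set (the confusion map is illustrative):

```python
# Hypothetical heuristic: fix OCR letter/digit confusions using the
# predicted LP layout (not the paper's exact rules).
TO_LETTER = {"0": "O", "1": "I", "2": "Z", "5": "S", "8": "B"}
TO_DIGIT = {v: k for k, v in TO_LETTER.items()}

def enforce_layout(prediction, layout):
    # `layout` marks each position as "L" (letter) or "N" (number),
    # e.g. "LLLNNNN" for the older Brazilian LP standard.
    fixed = []
    for ch, slot in zip(prediction, layout):
        if slot == "L" and ch.isdigit():
            fixed.append(TO_LETTER.get(ch, ch))
        elif slot == "N" and ch.isalpha():
            fixed.append(TO_DIGIT.get(ch, ch))
        else:
            fixed.append(ch)
    return "".join(fixed)

print(enforce_layout("A8C1Z34", "LLLNNNN"))  # -> "ABC1234"
```

Because the rule depends on the layout class, classifying the layout first is what lets a single recognition model serve LPs from multiple regions.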
Ranked #1 on License Plate Recognition on Caltech Cars
In this work, we propose to detect the iris and periocular regions simultaneously using coarse annotations and two well-known object detectors: YOLOv2 and Faster R-CNN.
This dataset is, to the best of our knowledge, three times larger than the largest public dataset found in the literature and contains a well-defined evaluation protocol to assist the development and evaluation of AMR methods.
Ranked #3 on Meter Reading on UFPR-AMR
In this paper, two approaches for robust iris segmentation based on Fully Convolutional Networks (FCNs) and Generative Adversarial Networks (GANs) are described.
The iris is widely used as a biometric trait because of its high level of distinctiveness and uniqueness.
The initial and paramount step for performing this type of recognition is the segmentation of the region of interest, i.e., the sclera.
The iris is considered the biometric trait with the highest degree of uniqueness.
First, in the SSIG dataset, composed of 2,000 frames from 101 vehicle videos, our system achieved a recognition rate of 93.53% and 47 Frames Per Second (FPS), performing better than both Sighthound and OpenALPR commercial systems (89.80% and 93.03%, respectively) and considerably outperforming previous results (81.80%).
Ranked #2 on License Plate Recognition on SSIG-SegPlate