Towards Image-based Automatic Meter Reading in Unconstrained Scenarios: A Robust and Efficient Approach

Existing approaches for image-based Automatic Meter Reading (AMR) have been evaluated on images captured in well-controlled scenarios. However, real-world meter reading involves unconstrained scenarios that are considerably more challenging due to dirt, varying lighting conditions, scale variations, and in-plane and out-of-plane rotations, among other factors. In this work, we present an end-to-end approach to AMR that focuses on unconstrained scenarios. Our main contribution is the insertion of a new stage into the AMR pipeline, called corner detection and counter classification, which enables the counter region to be rectified, and illegible/faulty meters to be rejected, prior to the recognition stage. We also introduce a publicly available dataset, called Copel-AMR, containing 12,500 meter images acquired in the field by the service company's employees themselves, including 2,500 images of faulty meters or meters whose reading is illegible due to occlusions. Experimental evaluation demonstrates that the proposed system, in which three networks operate in a cascaded mode, outperforms all baselines in recognition rate while remaining efficient. Moreover, since very few reading errors are tolerated in real-world applications, we show that our AMR system achieves recognition rates above 99% when readings made with lower confidence values are rejected.
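The cascaded decision flow described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stage functions are stubs, and the rule that a reading's confidence is the product of its per-digit confidences is an assumption made here for concreteness (the paper only states that low-confidence readings can be rejected).

```python
from typing import List, Optional


def read_meter(counter_legible: bool,
               digits: str,
               digit_confs: List[float],
               reject_threshold: float = 0.90) -> Optional[str]:
    """Sketch of the three-stage cascade with confidence-based rejection.

    Stage 1 (meter/counter detection) is assumed to have already produced
    a counter crop. `counter_legible` stands in for the CDCC stage's
    counter-classification output; `digits`/`digit_confs` stand in for the
    recognition stage's output. Returns the reading, or None if rejected.
    """
    # Stage 2: corner detection and counter classification.
    # Illegible or faulty counters are rejected before recognition.
    if not counter_legible:
        return None

    # Stage 3: digit recognition. Here we combine per-digit confidences
    # into one reading confidence by taking their product (an assumption).
    confidence = 1.0
    for c in digit_confs:
        confidence *= c

    # Confidence-based rejection: discard low-confidence readings so that
    # the accepted ones reach a very high recognition rate.
    if confidence < reject_threshold:
        return None
    return digits
```

For example, a legible counter read as "01234" with five digits at 0.99 confidence each (product ≈ 0.951) is accepted at a 0.90 threshold, while the same reading with one digit at 0.50 confidence is rejected.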


Datasets


Introduced in the Paper:

Copel-AMR

Used in the Paper:

UFPR-AMR

Results from the Paper


| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| Meter Reading | Copel-AMR | Fast-YOLOv4-SmallObj + CDCC-NET + Fast-OCR | Rank-1 Recognition Rate | 96.98 | #1 |
| Meter Reading | Copel-AMR | Fast-YOLOv4-SmallObj + Fast-OCR | Rank-1 Recognition Rate | 95.43 | #2 |
| Meter Reading | UFPR-AMR | Fast-YOLOv4-SmallObj + CDCC-NET + Fast-OCR | Rank-1 Recognition Rate | 94.75 | #1 |
| Meter Reading | UFPR-AMR | Fast-YOLOv4-SmallObj + Fast-OCR | Rank-1 Recognition Rate | 94.37 | #2 |