The UFPR-Periocular dataset contains 16,830 images of both eyes from 1,122 subjects; cropping each eye separately yields 33,660 single-eye images and 2,244 classes (one per eye).
All images were captured by the participants themselves, using their own smartphones through a mobile application (app) developed by the authors. There are 15 samples of each subject's eye, obtained in 3 sessions (5 images per session) with a minimum interval of 8 hours between sessions.
The images were collected from June 2019 to January 2020 and come in resolutions ranging from 360×160 to 1862×1008 pixels, depending on the mobile device used to capture them. In total, the dataset contains images from 196 different mobile devices.
Each subject captured all of their images with the same device model. The dataset's main sources of intra- and inter-class variability are lighting variation, occlusion, specular reflection, blur, motion blur, eyeglasses, off-angle pose, eye gaze, makeup, and facial expression.
The authors manually annotated the eye corners of all images with 4 points (the inside and outside corners of each eye) and used them to normalize the periocular region with respect to scale and rotation. All the original and cropped periocular images, eye-corner annotations, and experimental protocol files are publicly available to the research community (upon request).
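The scale-and-rotation normalization described above can be sketched as a similarity transform that maps the two corner points of one eye to a canonical horizontal pose. The sketch below is a minimal illustration, not the authors' actual code; the function name, the canonical anchor position, and the target corner-to-corner distance are all hypothetical choices:

```python
import numpy as np

def corner_alignment_transform(inner, outer, target_dist=100.0):
    """Build a 2x3 similarity transform (rotation + uniform scale +
    translation) that maps an eye's inner/outer corner points to a
    horizontal, fixed-width canonical pose. `inner` and `outer` are
    (x, y) points; the names and canonical values are illustrative,
    not the dataset's actual annotation format."""
    inner = np.asarray(inner, dtype=float)
    outer = np.asarray(outer, dtype=float)
    d = outer - inner
    angle = np.arctan2(d[1], d[0])       # in-plane rotation of the eye axis
    scale = target_dist / np.hypot(*d)   # normalize corner-to-corner distance
    c, s = np.cos(-angle), np.sin(-angle)
    R = scale * np.array([[c, -s], [s, c]])
    # Pin the inner corner to a fixed canonical location, here (50, 100).
    t = np.array([50.0, 100.0]) - R @ inner
    return np.hstack([R, t[:, None]])    # 2x3 affine matrix

# After alignment the outer corner lands target_dist to the right
# of the inner corner, on the same horizontal line:
M = corner_alignment_transform((10, 20), (40, 35))
p = M @ np.array([40.0, 35.0, 1.0])      # → approximately (150, 100)
```

A matrix of this form can be passed directly to a warping routine (e.g. OpenCV's `cv2.warpAffine`) to produce the rotation- and scale-normalized crop.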
The paper also reports the distribution of images by gender, age, and resolution, along with further experimental details and benchmarks.

Source: A new periocular dataset collected by mobile devices in unconstrained scenarios