DR(eye)VE is a large dataset of driving scenes for which eye-tracking annotations are available. The dataset features more than 500,000 registered frames, matching ego-centric views (from glasses worn by the driver) with car-centric views (from a roof-mounted camera), further enriched by measurements from other sensors.
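Because the two views are registered frame-by-frame, they can be paired by frame index. Below is a minimal sketch of such pairing; the directory layout and file names are illustrative assumptions, not the official DR(eye)VE structure.

```python
from pathlib import Path

def paired_frames(sequence_dir):
    """Yield (ego_frame, roof_frame) paths that share a frame index."""
    ego_dir = Path(sequence_dir) / "ego"    # glasses-mounted view (assumed name)
    roof_dir = Path(sequence_dir) / "roof"  # roof-mounted view (assumed name)
    for ego_frame in sorted(ego_dir.glob("*.jpg")):
        roof_frame = roof_dir / ego_frame.name
        if roof_frame.exists():  # frames are registered, so indices line up
            yield ego_frame, roof_frame
```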
21 PAPERS • NO BENCHMARKS YET
These images were generated using Blender and the IEE-Simulator with different head poses, and are labelled according to nine classes (straight, turned bottom-left, turned left, turned top-left, turned bottom-right, turned right, turned top-right, reclined, looking up). The dataset contains 16,013 training images and 2,825 testing images, plus 4,700 additional images for improvement.
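The label space and split sizes above translate directly into code. The following sketch is illustrative only; the enum values and split names are assumptions, with the class list and counts taken from the description.

```python
from enum import Enum

class HeadPose(Enum):
    STRAIGHT = 0
    TURNED_BOTTOM_LEFT = 1
    TURNED_LEFT = 2
    TURNED_TOP_LEFT = 3
    TURNED_BOTTOM_RIGHT = 4
    TURNED_RIGHT = 5
    TURNED_TOP_RIGHT = 6
    RECLINED = 7
    LOOKING_UP = 8

# Split sizes as stated in the dataset description.
SPLITS = {"train": 16_013, "test": 2_825, "improvement": 4_700}
```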
4 PAPERS • NO BENCHMARKS YET
These images were generated using the UnityEyes simulator after incorporating essential elements of eyeball physiology and modeling binocular vision dynamics. The images are annotated with head-pose and gaze-direction information, as well as 2D and 3D landmarks of the eye's most important features. Additionally, the images are divided into two classes denoting the status of the eye (Open for open eyes, Closed for closed eyes). This dataset was used to train a DNN model for detecting a driver's drowsiness status. The dataset contains 1,704 training images, 4,232 testing images, and an additional 4,103 images for improvement.
These images were generated using the UnityEyes simulator after incorporating essential elements of eyeball physiology and modeling binocular vision dynamics. The images are annotated with head-pose and gaze-direction information, as well as 2D and 3D landmarks of the eye's most important features. Additionally, the images are divided into eight classes denoting the gaze direction of a driver's eyes (TopLeft, TopRight, TopCenter, MiddleLeft, MiddleRight, BottomLeft, BottomRight, BottomCenter). This dataset was used to train a DNN model for estimating gaze direction. The dataset contains 61,063 training images, 132,630 testing images, and an additional 72,000 images for improvement.
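Note that the eight-way label set is a 3x3 grid of coarse horizontal and vertical directions with the center cell omitted. The sketch below makes that structure explicit; the binning function and its inputs are assumptions, while the class names come from the dataset description.

```python
GAZE_CLASSES = [
    "TopLeft", "TopCenter", "TopRight",
    "MiddleLeft", "MiddleRight",
    "BottomLeft", "BottomCenter", "BottomRight",
]

def gaze_class(horizontal: str, vertical: str) -> str:
    """Combine coarse bins ('Left'/'Center'/'Right' and 'Top'/'Middle'/'Bottom')
    into one of the eight gaze-direction classes."""
    label = vertical + horizontal
    if label == "MiddleCenter":
        # Straight-ahead gaze has no class in this 8-way scheme.
        raise ValueError("straight-ahead gaze is not among the eight classes")
    assert label in GAZE_CLASSES
    return label
```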
3 PAPERS • NO BENCHMARKS YET
Description: 1,003 People - Driver Behavior Collection Data. The data covers multiple ages and multiple time periods. The driver behaviors include dangerous behavior, fatigue behavior, and visual-movement behavior. Binocular cameras with RGB and infrared channels were used for recording. This data can be used for tasks such as driver behavior analysis.
2 PAPERS • NO BENCHMARKS YET
The Model for Attended Awareness in Driving (MAAD) dataset supports third-person estimation of a driver's attended awareness. It consists of videos of a scene, as seen by a person performing a task in that scene, along with noisily registered ego-centric gaze sequences from that person.
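To illustrate what "noisily registered" means in practice: the recorded gaze point jitters around the truly attended location, so any downstream model must denoise it. The toy sketch below applies a simple exponential moving average as a baseline smoother; this is an assumption for illustration only and is not the MAAD model.

```python
def smooth_gaze(gaze_xy, alpha=0.2):
    """Exponentially smooth a sequence of (x, y) gaze points."""
    smoothed, state = [], None
    for x, y in gaze_xy:
        if state is None:
            state = (x, y)
        else:
            state = (alpha * x + (1 - alpha) * state[0],
                     alpha * y + (1 - alpha) * state[1])
        smoothed.append(state)
    return smoothed
```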
1 PAPER • NO BENCHMARKS YET
Description: 304 People Multi-race - Driver Behavior Collection Data. The data covers multiple ages, multiple time periods, and multiple races (Caucasian, Black, Indian). The driver behaviors include dangerous behavior, fatigue behavior, and visual-movement behavior. Binocular cameras with RGB and infrared channels were used for recording. This data can be used for tasks such as driver behavior analysis.
0 PAPERS • NO BENCHMARKS YET