DR(eye)VE is a large dataset of driving scenes with eye-tracking annotations. It features more than 500,000 registered frames, matching ego-centric views (from glasses worn by drivers) with car-centric views (from a roof-mounted camera), further enriched by other sensor measurements.
29 PAPERS • NO BENCHMARKS YET
Dataset Statistics: The statistics of our dataset are summarized in Table 1 and compared with the largest existing dataset, DR(eye)VE [1]. Our dataset was collected from videos selected out of BDD100K [30, 31], a publicly available, large-scale, crowd-sourced driving video dataset. BDD100K contains human-demonstrated dashboard videos and time-stamped sensor measurements collected during urban driving in various weather and lighting conditions. To collect attention data for critical driving situations efficiently, we specifically selected video clips that both included braking events and took place in busy areas (see the supplementary materials for technical details). We then trimmed each video to cover the 6.5 seconds prior to and 3.5 seconds after its braking event. Other driving actions, e.g., turning, lane changing, and accelerating, were also captured. Following these procedures, 1,232 videos (3.5 hours in total) were collected. Some example images from our dataset are shown.
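The trimming step described above (6.5 s before and 3.5 s after each braking event) can be sketched as a small frame-window computation. This is an illustrative helper, not code from the dataset's release; the function name and the fps handling are assumptions.

```python
def clip_window(brake_time_s, fps, pre_s=6.5, post_s=3.5):
    """Return (start_frame, end_frame) indices for a clip spanning
    pre_s seconds before and post_s seconds after a braking event.

    brake_time_s: timestamp of the braking event in seconds.
    fps: frame rate of the source video.
    The clamp to 0 handles events near the start of a recording.
    """
    start = max(0, int(round((brake_time_s - pre_s) * fps)))
    end = int(round((brake_time_s + post_s) * fps))
    return start, end


# A braking event at t=10.0 s in a 30 fps video yields a 300-frame clip:
print(clip_window(10.0, 30))  # -> (105, 405)
```

The resulting window covers 10 seconds of driving context per event, matching the 6.5 + 3.5 second split stated above.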
20 PAPERS • NO BENCHMARKS YET
These images were generated using Blender and the IEE-Simulator with different head poses; each image is labelled with one of nine classes (straight, turned bottom-left, turned left, turned top-left, turned bottom-right, turned right, turned top-right, reclined, looking up). The dataset contains 16,013 training images and 2,825 testing images, in addition to 4,700 images for improvement.
4 PAPERS • NO BENCHMARKS YET
These images were generated using the UnityEyes simulator, extended with essential eyeball physiology elements and a model of binocular vision dynamics. Each image is annotated with head pose and gaze direction, as well as 2D and 3D landmarks of the eye's most important features. Additionally, the images are divided into two classes denoting the status of the eye (Open for open eyes, Closed for closed eyes). This dataset was used to train a DNN model for detecting a driver's drowsiness status. It contains 1,704 training images, 4,232 testing images, and an additional 4,103 images for improvement.
These images were generated using the UnityEyes simulator, extended with essential eyeball physiology elements and a model of binocular vision dynamics. Each image is annotated with head pose and gaze direction, as well as 2D and 3D landmarks of the eye's most important features. Additionally, the images are divided into eight classes denoting the gaze direction of a driver's eyes (TopLeft, TopRight, TopCenter, MiddleLeft, MiddleRight, BottomLeft, BottomRight, BottomCenter). This dataset was used to train a DNN model for estimating gaze direction. It contains 61,063 training images, 132,630 testing images, and an additional 72,000 images for improvement.
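The eight direction classes above amount to a 3x3 grid of vertical and horizontal gaze bins with the straight-ahead cell excluded. A minimal sketch of such a discretization is below; the yaw/pitch sign conventions and the 10-degree dead-zone threshold are assumptions for illustration, not values from the dataset.

```python
def gaze_class(yaw_deg, pitch_deg, thresh=10.0):
    """Map a gaze direction to one of the eight direction classes.

    yaw_deg: horizontal angle, positive = right (assumed convention).
    pitch_deg: vertical angle, positive = up (assumed convention).
    thresh: dead-zone half-width in degrees (assumed value).
    Returns None for straight-ahead gaze, which has no class among
    the eight (there is no 'MiddleCenter' label in the dataset).
    """
    horiz = "Left" if yaw_deg < -thresh else "Right" if yaw_deg > thresh else "Center"
    vert = "Bottom" if pitch_deg < -thresh else "Top" if pitch_deg > thresh else "Middle"
    if vert == "Middle" and horiz == "Center":
        return None
    # Class names pair the vertical bin first, e.g. TopLeft, BottomCenter.
    return vert + horiz


print(gaze_class(20.0, 20.0))   # -> TopRight
print(gaze_class(0.0, -20.0))   # -> BottomCenter
```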
3 PAPERS • NO BENCHMARKS YET
The Model for Attended Awareness in Driving (MAAD) is a dataset of third-person estimates of a driver’s attended awareness. It consists of videos of a scene, as seen by a person performing a task in the scene, along with noisily registered ego-centric gaze sequences from that person.
2 PAPERS • NO BENCHMARKS YET