no code implementations • 26 Jun 2023 • Li Ding, Jack Terwilliger, Aishni Parab, Meng Wang, Lex Fridman, Bruce Mehler, Bryan Reimer
Non-intrusive, real-time analysis of the dynamics of the eye region allows us to monitor humans' visual attention allocation and estimate their mental state during the performance of real-world tasks, which can potentially benefit a wide range of human-computer interaction (HCI) applications.
no code implementations • 5 Dec 2020 • Sandipan Banerjee, Ajjen Joshi, Jay Turcot, Bryan Reimer, Taniya Mishra
Distracted drivers are dangerous drivers.
no code implementations • 8 Apr 2019 • Jack Terwilliger, Michael Glazer, Henri Schmidt, Josh Domeyer, Heishiro Toyoda, Bruce Mehler, Bryan Reimer, Lex Fridman
Humans, as both pedestrians and drivers, generally skillfully navigate traffic intersections.
no code implementations • 21 Mar 2019 • Li Ding, Jack Terwilliger, Rini Sherony, Bryan Reimer, Lex Fridman
What is not known is how much extra information the temporal dynamics of the visual scene carries that is complementary to the information available in the individual frames of the video.
no code implementations • 19 Nov 2017 • Lex Fridman, Daniel E. Brown, Michael Glazer, William Angell, Spencer Dodd, Benedikt Jenik, Jack Terwilliger, Aleksandr Patsekin, Julia Kindelsberger, Li Ding, Sean Seaman, Alea Mehler, Andrew Sipperley, Anthony Pettinato, Bobbie Seppelt, Linda Angell, Bruce Mehler, Bryan Reimer
For the foreseeable future, human beings will likely remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving.
1 code implementation • 12 Oct 2017 • Lex Fridman, Li Ding, Benedikt Jenik, Bryan Reimer
We consider the paradigm of a black box AI system that makes life-critical decisions.
no code implementations • 14 Jun 2017 • Lex Fridman, Benedikt Jenik, Shaiyan Keshvari, Bryan Reimer, Christoph Zetzsche, Ruth Rosenholtz
Foveal vision makes up less than 1% of the visual field.
no code implementations • 3 Dec 2016 • Lex Fridman, Bryan Reimer
We propose a framework for semi-automated annotation of video frames where the video is of an object that at any point in time can be labeled as being in one of a finite number of discrete states.
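The idea of labeling each frame with one of a finite set of discrete states lends itself to a confidence-gated loop: a classifier auto-labels frames it is sure about and defers the rest to a human. A minimal sketch of that pattern (the threshold, state names, and function are hypothetical illustrations, not the authors' implementation):

```python
# Semi-automated discrete-state annotation sketch: auto-label frames whose
# top-state confidence clears a threshold; flag the rest for human review.

def annotate(frame_confidences, threshold=0.9):
    """For each frame, return the most likely state if its probability
    is >= threshold, otherwise None (meaning: ask a human annotator)."""
    labels = []
    for conf in frame_confidences:  # conf: dict mapping state -> probability
        state, p = max(conf.items(), key=lambda kv: kv[1])
        labels.append(state if p >= threshold else None)
    return labels

# Toy example: an object (e.g. the driver's eyes) that is "open" or "closed".
frames = [{"open": 0.95, "closed": 0.05},
          {"open": 0.55, "closed": 0.45},
          {"open": 0.02, "closed": 0.98}]
print(annotate(frames))  # ['open', None, 'closed']
```

Only the middle, ambiguous frame is routed to a human, which is where the "semi-automated" savings come from.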
no code implementations • 26 Nov 2016 • Lex Fridman, Heishiro Toyoda, Sean Seaman, Bobbie Seppelt, Linda Angell, Joonbum Lee, Bruce Mehler, Bryan Reimer
We consider a large dataset of real-world, on-road driving from a 100-car naturalistic study to explore the predictive power of driver glances and, specifically, to answer the following question: what can be predicted about the state of the driver and the state of the driving environment from a 6-second sequence of macro-glances?
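One simple way to make a 6-second macro-glance sequence usable by a predictive model is to convert it into a fixed-length feature vector, such as the fraction of time spent in each glance region. A sketch under assumed region names (the region list and function are illustrative, not the study's actual feature set):

```python
from collections import Counter

# Hypothetical glance regions; a real coding scheme would follow the study's
# annotation taxonomy.
REGIONS = ["road", "center_stack", "instrument_cluster", "left", "right"]

def glance_features(sequence):
    """Turn a sequence of per-sample glance labels into the fraction of
    samples spent in each region, in a fixed REGIONS order."""
    counts = Counter(sequence)
    total = len(sequence)
    return [counts[r] / total for r in REGIONS]

# A 6-second sequence sampled at ~1.67 Hz: mostly road, briefly center stack.
seq = ["road"] * 6 + ["center_stack"] * 2 + ["road"] * 2
print(glance_features(seq))  # [0.8, 0.2, 0.0, 0.0, 0.0]
```

A vector like this can then feed any standard classifier of driver state or driving environment.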
no code implementations • 22 Nov 2015 • Irman Abdić, Lex Fridman, Erik Marchi, Daniel E. Brown, William Angell, Bryan Reimer, Björn Schuller
We introduce a recurrent neural network architecture for automated road surface wetness detection from audio of tire-surface interaction.
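The core of such a system is a recurrent network that consumes a sequence of per-frame audio features and emits a wet/dry probability. A minimal NumPy sketch with hypothetical dimensions (the paper's actual architecture and features differ; this only illustrates the recurrent-classification shape):

```python
import numpy as np

# Toy recurrent classifier over audio feature frames. Weights are random
# placeholders standing in for trained parameters.
rng = np.random.default_rng(0)
n_feat, n_hidden = 13, 8  # e.g. 13 spectral coefficients per audio frame
Wx = rng.normal(size=(n_hidden, n_feat)) * 0.1
Wh = rng.normal(size=(n_hidden, n_hidden)) * 0.1
w_out = rng.normal(size=n_hidden) * 0.1

def predict_wet(frames):
    """Run a simple tanh RNN over the frame sequence; the sigmoid of the
    final hidden state's projection is P(road surface is wet)."""
    h = np.zeros(n_hidden)
    for x in frames:
        h = np.tanh(Wx @ x + Wh @ h)
    return 1.0 / (1.0 + np.exp(-(w_out @ h)))

audio = rng.normal(size=(100, n_feat))  # 100 frames of tire-surface audio features
p = predict_wet(audio)
print(0.0 <= p <= 1.0)  # True
```

The recurrence is what lets the classifier exploit temporal texture in the tire-surface sound rather than a single frame's spectrum.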
no code implementations • 17 Aug 2015 • Lex Fridman, Joonbum Lee, Bryan Reimer, Trent Victor
The main insight of the paper is conveyed through the analogy of an "owl" and a "lizard", which describes the degree to which the eyes and the head move when shifting gaze.
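The owl/lizard distinction can be quantified as the fraction of a gaze shift carried by head rotation rather than eye rotation. A hypothetical illustration (the function and thresholds are not from the paper):

```python
def head_contribution(head_deg, eye_deg):
    """Fraction of the total gaze shift (in degrees) performed by the head.
    Near 1.0 -> 'owl' (head-dominant); near 0.0 -> 'lizard' (eye-dominant)."""
    total = abs(head_deg) + abs(eye_deg)
    return abs(head_deg) / total if total else 0.0

print(head_contribution(25, 5))  # ~0.833 -> owl-like: mostly head movement
print(head_contribution(3, 27))  # 0.1    -> lizard-like: mostly eye movement
```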
no code implementations • 16 Jul 2015 • Lex Fridman, Philipp Langhans, Joonbum Lee, Bryan Reimer
In theory, vision-based tracking of the eye can provide a good estimate of gaze location.