Search Results for author: Iuliia Kotseruba

Found 20 papers, 8 papers with code

Data Limitations for Modeling Top-Down Effects on Drivers' Attention

1 code implementation12 Apr 2024 Iuliia Kotseruba, John K. Tsotsos

The crux of the problem is the lack of public data with annotations that could be used to train top-down models and to evaluate how well models of any kind capture the effects of task on attention.

Gaze Prediction

SCOUT+: Towards Practical Task-Driven Drivers' Gaze Prediction

1 code implementation12 Apr 2024 Iuliia Kotseruba, John K. Tsotsos

In this paper, we address the challenge of effectively modeling task and context using common sources of data for use in practical systems.

Gaze Prediction

Understanding and Modeling the Effects of Task and Context on Drivers' Gaze Allocation

1 code implementation13 Oct 2023 Iuliia Kotseruba, John K. Tsotsos

Therefore, to enable analysis and modeling of these factors for drivers' gaze prediction, we propose the following: 1) we correct the data processing pipeline used in DR(eye)VE to reduce noise in the recorded gaze data; 2) we then add per-frame labels for driving task and context; 3) we benchmark a number of baseline and SOTA models for saliency and driver gaze prediction and use new annotations to analyze how their performance changes in scenarios involving different tasks; and, lastly, 4) we develop a novel model that modulates drivers' gaze prediction with explicit action and context information.
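For context only (this is not code from the paper), saliency and driver gaze prediction benchmarks of this kind are typically scored by comparing each predicted saliency map against a ground-truth gaze density map. The sketch below shows two widely used metrics, KL divergence and Pearson's correlation coefficient, on synthetic arrays standing in for real maps:

```python
import numpy as np

def kl_divergence(pred, gt, eps=1e-7):
    """KL divergence between a predicted saliency map and a ground-truth
    gaze density map, both normalized to sum to 1 (lower is better)."""
    pred = pred / (pred.sum() + eps)
    gt = gt / (gt.sum() + eps)
    return float(np.sum(gt * np.log(eps + gt / (pred + eps))))

def pearson_cc(pred, gt, eps=1e-7):
    """Pearson correlation coefficient between the two maps (higher is better)."""
    p = (pred - pred.mean()) / (pred.std() + eps)
    g = (gt - gt.mean()) / (gt.std() + eps)
    return float((p * g).mean())

# Stand-ins for a model's output and a recorded gaze density map for one frame.
pred_map = np.random.rand(120, 160)
gt_map = np.random.rand(120, 160)
print(kl_divergence(pred_map, gt_map), pearson_cc(pred_map, gt_map))
```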

Gaze Prediction

Intend-Wait-Perceive-Cross: Exploring the Effects of Perceptual Limitations on Pedestrian Decision-Making

no code implementations8 Feb 2023 Iuliia Kotseruba, Amir Rasouli

Current research on pedestrian behavior understanding focuses on the dynamics of pedestrians and makes strong assumptions about their perceptual abilities.

Decision Making

NeurIPS 2022 Competition: Driving SMARTS

no code implementations14 Nov 2022 Amir Rasouli, Randy Goebel, Matthew E. Taylor, Iuliia Kotseruba, Soheil Alizadeh, Tianpei Yang, Montgomery Alban, Florian Shkurti, Yuzheng Zhuang, Adam Scibior, Kasra Rezaee, Animesh Garg, David Meger, Jun Luo, Liam Paull, Weinan Zhang, Xinyu Wang, Xi Chen

The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods, trained on a combination of naturalistic AD data and the open-source simulation platform SMARTS.

Autonomous Driving, Reinforcement Learning (RL)

PedFormer: Pedestrian Behavior Prediction via Cross-Modal Attention Modulation and Gated Multitask Learning

no code implementations14 Oct 2022 Amir Rasouli, Iuliia Kotseruba

To address this challenge, we propose a novel framework that relies on different data modalities to predict future trajectories and crossing actions of pedestrians from an ego-centric perspective.
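For reference (not the paper's own code), pedestrian trajectory forecasts of this kind are commonly evaluated with average and final displacement errors (ADE/FDE) between predicted and observed future positions; a minimal sketch:

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and final displacement errors for one predicted trajectory.
    pred, gt: arrays of shape (T, 2) holding future (x, y) positions."""
    dists = np.linalg.norm(pred - gt, axis=1)
    return float(dists.mean()), float(dists[-1])

# Example: a constant-velocity guess compared with a curved ground-truth path.
t = np.arange(1, 16, dtype=float)
gt = np.stack([t, 0.05 * t ** 2], axis=1)
pred = np.stack([t, np.zeros_like(t)], axis=1)
print(ade_fde(pred, gt))
```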

Industry and Academic Research in Computer Vision

no code implementations10 Jul 2021 Iuliia Kotseruba, Manos Papagelis, John K. Tsotsos

The results indicate that the distribution of the research topics is similar in industry and academic papers.

Behavioral Research and Practical Models of Drivers' Attention

1 code implementation12 Apr 2021 Iuliia Kotseruba, John K. Tsotsos

Drivers deal with multiple concurrent tasks, such as keeping the vehicle in the lane, observing and anticipating the actions of other road users, reacting to hazards, and dealing with distractions inside and outside the vehicle.

On the Control of Attentional Processes in Vision

no code implementations5 Jan 2021 John K. Tsotsos, Omar Abid, Iuliia Kotseruba, Markus D. Solbach

The key conclusions of this paper are that an executive controller is necessary for human attentional function in vision, and that there is a 'first principles' computational approach to its understanding that is complementary to the previous approaches that focus on modelling or learning from experimental observations directly.

Pedestrian Action Anticipation using Contextual Feature Fusion in Stacked RNNs

1 code implementation13 May 2020 Amir Rasouli, Iuliia Kotseruba, John K. Tsotsos

To this end, we propose a solution for the problem of pedestrian action anticipation at the point of crossing.

Action Anticipation, Autonomous Vehicles

Do Saliency Models Detect Odd-One-Out Targets? New Datasets and Evaluations

2 code implementations13 May 2020 Iuliia Kotseruba, Calden Wloka, Amir Rasouli, John K. Tsotsos

Furthermore, we investigate the effect of training state-of-the-art CNN-based saliency models on these types of stimuli and conclude that the additional training data does not lead to a significant improvement in their ability to find odd-one-out targets.
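As a rough illustration of how such an evaluation can be set up (this is not the datasets' official protocol), one simple criterion is whether the peak of a model's saliency map lands inside the odd-one-out target's bounding box:

```python
import numpy as np

def peak_hits_target(saliency_map, target_box):
    """Return True if the most salient location falls inside the target's
    bounding box (x1, y1, x2, y2); a simple detection criterion for
    odd-one-out (singleton) search stimuli."""
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    x1, y1, x2, y2 = target_box
    return x1 <= x <= x2 and y1 <= y <= y2

# Synthetic map with a bright spot at the known target location.
sal = np.zeros((100, 100))
sal[40, 60] = 1.0
print(peak_hits_target(sal, (55, 35, 65, 45)))  # True
```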

Odd One Out

A Possible Reason for why Data-Driven Beats Theory-Driven Computer Vision

no code implementations28 Aug 2019 John K. Tsotsos, Iuliia Kotseruba, Alexander Andreopoulos, Yulong Wu

This reveals a strong mismatch between the optimal performance ranges of classical theory-driven algorithms and the sensor-setting distributions in the common vision datasets for which data-driven models were trained.

Rapid Visual Categorization is not Guided by Early Salience-Based Selection

no code implementations15 Jan 2019 John K. Tsotsos, Iuliia Kotseruba, Calden Wloka

The current dominant visual processing paradigm in both human and machine research is the feedforward, layered hierarchy of neural-like processing elements.

SMILER: Saliency Model Implementation Library for Experimental Research

1 code implementation20 Dec 2018 Calden Wloka, Toni Kunić, Iuliia Kotseruba, Ramin Fahimi, Nicholas Frosst, Neil D. B. Bruce, John K. Tsotsos

The Saliency Model Implementation Library for Experimental Research (SMILER) is a new software package which provides an open, standardized, and extensible framework for maintaining and executing computational saliency models.
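Purely to illustrate what a standardized, extensible saliency-model wrapper looks like (this hypothetical interface is NOT SMILER's actual API; see the SMILER repository for the real interfaces), a minimal sketch might be:

```python
from abc import ABC, abstractmethod
import numpy as np

class SaliencyModel(ABC):
    """Hypothetical common interface: every model maps an image to a saliency map."""
    @abstractmethod
    def compute_saliency(self, image: np.ndarray) -> np.ndarray:
        """Map an HxWx3 uint8 image to an HxW saliency map in [0, 1]."""

class CenterBiasBaseline(SaliencyModel):
    """Trivial baseline: a Gaussian centered on the image."""
    def compute_saliency(self, image: np.ndarray) -> np.ndarray:
        h, w = image.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        g = np.exp(-(((ys - h / 2) ** 2) / (2 * (h / 4) ** 2)
                     + ((xs - w / 2) ** 2) / (2 * (w / 4) ** 2)))
        return g / g.max()

# Any model exposing this interface can be driven by the same experiment code.
models = {"center_bias": CenterBiasBaseline()}
img = np.zeros((240, 320, 3), dtype=np.uint8)
maps = {name: m.compute_saliency(img) for name, m in models.items()}
```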

Visual Attention and its Intimate Links to Spatial Cognition

no code implementations29 Jun 2018 John K. Tsotsos, Iuliia Kotseruba, Amir Rasouli, Markus D. Solbach

It is almost universal to regard attention as the facility that permits an agent, human or machine, to give priority processing resources to relevant stimuli while ignoring the irrelevant.

Active Fixation Control to Predict Saccade Sequences

2 code implementations CVPR 2018 Calden Wloka, Iuliia Kotseruba, John K. Tsotsos

However, on static images the emphasis of these models has largely been on non-ordered prediction of fixations through a saliency map.

Saccade Sequence Prediction: Beyond Static Saliency Maps

no code implementations29 Nov 2017 Calden Wloka, Iuliia Kotseruba, John K. Tsotsos

The accuracy of such models has dramatically increased recently due to deep learning.

STAR-RT: Visual attention for real-time video game playing

no code implementations26 Nov 2017 Iuliia Kotseruba, John K. Tsotsos

In this paper we present STAR-RT, the first working prototype of the Selective Tuning Attention Reference (STAR) model and Cognitive Programs (CPs).

A Review of 40 Years of Cognitive Architecture Research: Core Cognitive Abilities and Practical Applications

no code implementations27 Oct 2016 Iuliia Kotseruba, John K. Tsotsos

Thus, in this survey we wanted to shift the focus towards a more inclusive and high-level overview of the research on cognitive architectures.

Joint Attention in Autonomous Driving (JAAD)

no code implementations15 Sep 2016 Iuliia Kotseruba, Amir Rasouli, John K. Tsotsos

In this paper we present a novel dataset for a critical aspect of autonomous driving: the joint attention that must occur between drivers and pedestrians, cyclists, or other drivers.

Robotics
