Search Results for author: Arkady Zgonnikov

Found 12 papers, 3 papers with code

In the driver's mind: modeling the dynamics of human overtaking decisions in interactions with oncoming automated vehicles

no code implementations28 Mar 2024 Samir H. A. Mohammad, Haneen Farah, Arkady Zgonnikov

To address these gaps, we conducted a "reverse Wizard-of-Oz" driving simulator experiment with 30 participants who repeatedly interacted with oncoming AVs and HDVs, measuring the drivers' gap acceptance decisions and response times.

Are you sure? Modelling Drivers' Confidence Judgments in Left-Turn Gap Acceptance Decisions

no code implementations11 Mar 2024 Floor Bontje, Arkady Zgonnikov

We found that confidence in these decisions depends on the size of the gap to the oncoming vehicle.

Robust Multi-Modal Density Estimation

no code implementations19 Jan 2024 Anna Mészáros, Julian F. Schumann, Javier Alonso-Mora, Arkady Zgonnikov, Jens Kober

We compared our approach to state-of-the-art methods for density estimation as well as ablations of ROME, showing that it not only outperforms established methods but is also more robust to a variety of distributions.

Density Estimation
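
The entry above reports comparisons of ROME against established density-estimation baselines on multi-modal data. As a point of reference only (this is a generic baseline sketch, not ROME itself; the data, mixture components, and scoring choice are illustrative assumptions), one such comparison typically fits a standard estimator to a multi-modal sample and scores it by average log-likelihood:

```python
# Generic baseline sketch (not ROME): fit a Gaussian mixture to a bimodal
# sample and score it by average log-likelihood, a common way to compare
# density estimators.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated modes in 2D
samples = np.vstack([
    rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(500, 2)),
    rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(500, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
print("avg log-likelihood:", gmm.score(samples))  # higher is better
```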

Data-driven Semi-supervised Machine Learning with Surrogate Safety Measures for Abnormal Driving Behavior Detection

no code implementations7 Dec 2023 Lanxin Zhang, Yongqi Dong, Haneen Farah, Arkady Zgonnikov, Bart van Arem

Moreover, previous ML-based approaches predominantly utilize basic vehicle motion features (such as velocity and acceleration) to label and detect abnormal driving behaviors, whereas this study introduces Surrogate Safety Measures (SSMs) as input features for ML models to improve detection performance.

Anomaly Detection
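
The entry above contrasts raw motion features (velocity, acceleration) with Surrogate Safety Measures as model inputs. A minimal sketch of one widely used SSM, time-to-collision (TTC), computed from the spacing and speed difference between a following and a leading vehicle; the function name and the numeric values are illustrative, not taken from the paper:

```python
import numpy as np

def time_to_collision(spacing_m: np.ndarray,
                      follower_speed: np.ndarray,
                      leader_speed: np.ndarray) -> np.ndarray:
    """TTC = spacing / closing speed; infinite when the gap is not closing."""
    closing_speed = follower_speed - leader_speed
    ttc = np.full_like(spacing_m, np.inf, dtype=float)
    closing = closing_speed > 0
    ttc[closing] = spacing_m[closing] / closing_speed[closing]
    return ttc

# Illustrative trajectory samples (metres, metres/second)
spacing = np.array([30.0, 22.0, 15.0, 9.0])
v_follow = np.array([25.0, 25.0, 24.0, 24.0])
v_lead = np.array([20.0, 20.0, 20.0, 20.0])
print(time_to_collision(spacing, v_follow, v_lead))  # [6.   4.4  3.75 2.25]
```

Low TTC values like these could then serve as input features for a semi-supervised detector in place of raw speeds.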

A cognitive process approach to modeling gap acceptance in overtaking

no code implementations8 Jun 2023 Samir H. A. Mohammad, Haneen Farah, Arkady Zgonnikov

In this study, we address this issue by employing a cognitive process approach to describe the dynamic interactions with the oncoming vehicle during overtaking maneuvers.

Decision Making
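
The entry above refers to a cognitive process account of gap acceptance. As a generic illustration of that model family (an evidence-accumulation sketch with made-up parameters, not the authors' specific formulation), the accept/reject decision can be simulated as noisy evidence drifting towards one of two bounds:

```python
# Generic evidence-accumulation (drift-diffusion) sketch for a binary
# accept/reject gap decision. Parameters are illustrative, not fitted values.
import numpy as np

def simulate_decision(drift, bound=1.0, noise=1.0, dt=0.01, max_t=5.0, rng=None):
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return ("accept" if x >= bound else "reject"), t

rng = np.random.default_rng(1)
# Larger gaps -> stronger positive drift -> more (and faster) acceptances
for gap_s in (2.0, 4.0, 6.0):
    drift = 0.5 * (gap_s - 3.5)      # illustrative mapping from gap size to drift
    decision, rt = simulate_decision(drift, rng=rng)
    print(f"gap {gap_s:.1f}s -> {decision} after {rt:.2f}s")
```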

Smooth-Trajectron++: Augmenting the Trajectron++ behaviour prediction model with smooth attention

1 code implementation31 May 2023 Frederik S. B. Westerhout, Julian F. Schumann, Arkady Zgonnikov

Understanding traffic participants' behaviour is crucial for predicting their future trajectories, which in turn aids the development of safe and reliable planning systems for autonomous vehicles.

Autonomous Driving · Trajectory Forecasting

Using Models Based on Cognitive Theory to Predict Human Behavior in Traffic: A Case Study

no code implementations24 May 2023 Julian F. Schumann, Aravinda Ramakrishnan Srinivasan, Jens Kober, Gustav Markkula, Arkady Zgonnikov

Automated vehicles have the potential to revolutionize transportation, but they are currently unable to ensure a safe and time-efficient driving style.

Decision Making

Benchmark for Models Predicting Human Behavior in Gap Acceptance Scenarios

no code implementations10 Nov 2022 Julian Frederik Schumann, Jens Kober, Arkady Zgonnikov

Autonomous vehicles currently suffer from a time-inefficient driving style caused by uncertainty about human behavior in traffic interactions.

Autonomous Vehicles · Trajectory Planning

MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning

1 code implementation30 Dec 2021 Markus Peschl, Arkady Zgonnikov, Frans A. Oliehoek, Luciano C. Siebert

Inferring reward functions from demonstrations and from pairwise preferences are two promising approaches for aligning Reinforcement Learning (RL) agents with human intentions.

Active Learning · Ethics · +1
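
The entry above mentions learning reward functions from demonstrations and pairwise preferences. As a hedged illustration of the pairwise-preference ingredient, here is a Bradley-Terry style likelihood as commonly used in preference-based reward learning; the function name and dummy values are illustrative, and this is not necessarily MORAL's exact objective:

```python
# Bradley-Terry style preference likelihood for reward learning (generic
# sketch, not MORAL's exact objective). The preferred trajectory should
# receive higher cumulative reward under the learned reward model.
import torch

def preference_loss(reward_a: torch.Tensor, reward_b: torch.Tensor,
                    a_preferred: torch.Tensor) -> torch.Tensor:
    """reward_a/reward_b: summed rewards of trajectories A/B, shape (batch,).
    a_preferred: 1.0 if the human preferred A, else 0.0."""
    # P(A preferred) = exp(R_A) / (exp(R_A) + exp(R_B)) = sigmoid(R_A - R_B)
    logits = reward_a - reward_b
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, a_preferred)

# Illustrative usage with dummy reward sums
r_a = torch.tensor([1.2, -0.3])
r_b = torch.tensor([0.4, 0.9])
prefs = torch.tensor([1.0, 0.0])   # human preferred A, then B
print(preference_loss(r_a, r_b, prefs))
```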

Meaningful human control: actionable properties for AI system development

no code implementations25 Nov 2021 Luciano Cavalcante Siebert, Maria Luce Lupetti, Evgeni Aizenberg, Niek Beckers, Arkady Zgonnikov, Herman Veluwenkamp, David Abbink, Elisa Giaccardi, Geert-Jan Houben, Catholijn M. Jonker, Jeroen van den Hoven, Deborah Forster, Reginald L. Lagendijk

The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility for humans. However, clear requirements for researchers, designers, and engineers do not yet exist, making the development of AI-based systems that remain under meaningful human control challenging.

Optimality and limitations of audio-visual integration for cognitive systems

no code implementations2 Dec 2019 W. Paul Boyce, Tony Lindsay, Arkady Zgonnikov, Ignacio Rano, KongFatt Wong-Lin

In particular, the same optimal computational model can lead to illusory percepts, and we suggest that further studies are needed to detect and mitigate these illusions, which could arise as artefacts in artificial cognitive systems.

Decision Making
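
The entry above refers to an optimal computational model of audio-visual integration. A common formulation of such optimality is reliability-weighted (maximum-likelihood) cue combination, sketched below with made-up numbers; when the cues conflict, the fused estimate is pulled towards the more reliable cue, which is one way such a model can yield illusory percepts:

```python
# Reliability-weighted (maximum-likelihood) cue integration sketch.
# Each cue is a Gaussian estimate; weights are inverse variances.
def fuse(mu_audio, var_audio, mu_visual, var_visual):
    w_a, w_v = 1.0 / var_audio, 1.0 / var_visual
    mu = (w_a * mu_audio + w_v * mu_visual) / (w_a + w_v)
    var = 1.0 / (w_a + w_v)
    return mu, var

# Conflicting cues: sound at 0 deg, flash of light at 10 deg.
# Vision is more reliable here, so the fused location is drawn towards it,
# the ventriloquist-style bias such models predict.
print(fuse(mu_audio=0.0, var_audio=4.0, mu_visual=10.0, var_visual=1.0))
# -> (8.0, 0.8)
```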
