no code implementations • 26 Mar 2024 • Chuhan Jiao, Yao Wang, Guanhua Zhang, Mihai Bâce, Zhiming Hu, Andreas Bulling
We present DiffGaze, a novel method for generating realistic and diverse continuous human gaze sequences on 360° images based on a conditional score-based denoising diffusion model.
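To illustrate the sampling side of such a model, the sketch below runs a DDPM-style reverse diffusion over a toy gaze sequence. The `dummy_score` network, the step count, and the noise schedule are all illustrative stand-ins; DiffGaze trains a conditional score network on real 360° gaze data.

```python
import numpy as np

def reverse_diffusion_step(x_t, t, score_fn, betas, rng):
    """One ancestral-sampling step of a DDPM-style reverse process.

    x_t: noisy gaze sequence at step t, shape (seq_len, 2) for (yaw, pitch).
    score_fn: stand-in for the trained noise-prediction network.
    """
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = np.prod(1.0 - betas[: t + 1])
    eps_hat = score_fn(x_t, t)  # predicted noise at step t
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_t)
    noise = rng.standard_normal(x_t.shape) if t > 0 else 0.0
    return mean + np.sqrt(beta_t) * noise

# Toy usage: denoise pure noise into a "gaze sequence" with a dummy score network.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 50)          # illustrative noise schedule
x = rng.standard_normal((32, 2))             # 32 gaze points in (yaw, pitch)
dummy_score = lambda x_t, t: 0.1 * x_t       # placeholder for the real network
for t in reversed(range(len(betas))):
    x = reverse_diffusion_step(x, t, dummy_score, betas, rng)
print(x.shape)  # (32, 2)
```

Conditioning on the 360° image would enter through the score network's inputs; the reverse-step arithmetic itself is unchanged.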
no code implementations • 20 Mar 2024 • Florian Strohm, Mihai Bâce, Andreas Bulling
At the core of our method is a Siamese convolutional neural encoder that learns the user embeddings by contrasting the image and personal saliency map pairs of different users.
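A minimal numerical sketch of the contrastive idea follows. The linear `embed` function stands in for the convolutional Siamese encoder, and the hinge-style loss is an assumed form, not necessarily the paper's exact objective: same-user image/saliency pairs are pulled together, different users' pairs pushed apart.

```python
import numpy as np

def embed(pair, W):
    """Hypothetical shared ('Siamese') encoder: image and personal saliency map
    are flattened, concatenated, and linearly projected, then L2-normalised.
    The real method uses a convolutional encoder; this is only a stand-in."""
    x = np.concatenate([pair[0].ravel(), pair[1].ravel()])
    z = W @ x
    return z / np.linalg.norm(z)

def contrastive_loss(z_anchor, z_pos, z_negs, margin=0.5):
    """Pull together embeddings of the same user, push apart different users."""
    pos = 1.0 - z_anchor @ z_pos                       # cosine distance, same user
    negs = [max(0.0, margin - (1.0 - z_anchor @ zn))   # hinge on other users
            for zn in z_negs]
    return pos + sum(negs)

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 2 * 8 * 8))    # toy projection to a 16-d user embedding
pairs = [(rng.random((8, 8)), rng.random((8, 8))) for _ in range(3)]
zs = [embed(p, W) for p in pairs]
loss = contrastive_loss(zs[0], zs[1], zs[2:])
print(loss >= 0.0)  # True
```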
no code implementations • 20 Mar 2024 • Florian Strohm, Mihai Bâce, Markus Kaltenecker, Andreas Bulling
To ensure that the desired feature measurement is changed towards the target value without altering uncorrelated features, we introduce a novel semantic face feature loss.
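One plausible shape for such a loss is sketched below (an assumed form for illustration, not the paper's exact formulation): a squared-error term drives the targeted feature toward its goal value, while a preservation term penalises drift in all remaining features.

```python
import numpy as np

def semantic_feature_loss(feats_edit, feats_orig, target_idx, target_value, lam=1.0):
    """Sketch of a feature-targeting loss: push one semantic face feature toward
    a target value while penalising change in the uncorrelated features."""
    target_term = (feats_edit[target_idx] - target_value) ** 2
    mask = np.ones_like(feats_edit, dtype=bool)
    mask[target_idx] = False
    preserve_term = np.sum((feats_edit[mask] - feats_orig[mask]) ** 2)
    return target_term + lam * preserve_term

orig = np.array([0.2, 0.8, 0.5])    # e.g. hypothetical [age, smile, pose] scores
edited = np.array([0.2, 0.3, 0.5])  # only the "smile" feature changed
loss = semantic_feature_loss(edited, orig, target_idx=1, target_value=0.3)
print(loss)  # 0.0 -- target reached, other features untouched
```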
no code implementations • 20 Jun 2023 • Anna Penzkofer, Simon Schaefer, Florian Strohm, Mihai Bâce, Stefan Leutenegger, Andreas Bulling
We show that intentions of human players, i.e. the precursors of goal-oriented decisions, can be robustly predicted from eye gaze even for the long-horizon sparse-rewards task of Montezuma's Revenge, one of the most challenging RL tasks in the Atari 2600 game suite.
1 code implementation • COLING 2022 • Adnen Abdessaied, Mihai Bâce, Andreas Bulling
We propose Neuro-Symbolic Visual Dialog (NSVD), the first method to combine deep learning and symbolic program execution for multi-round visually-grounded reasoning.
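The symbolic half of such a pipeline can be illustrated with a toy executor: a (here, imagined) neural parser maps a question to a program of filter/count steps, which then runs deterministically over a scene representation. The operation names and scene schema are illustrative only; NSVD's actual programs and ontology differ.

```python
# Minimal symbolic program execution over a toy scene, in the spirit of
# neuro-symbolic visual dialog. All names below are illustrative.
scene = [
    {"shape": "cube", "color": "red"},
    {"shape": "ball", "color": "red"},
    {"shape": "cube", "color": "blue"},
]

def execute(program, objects):
    """Run a list of (op, arg) steps produced by a hypothetical neural parser."""
    for op, arg in program:
        if op == "filter_color":
            objects = [o for o in objects if o["color"] == arg]
        elif op == "filter_shape":
            objects = [o for o in objects if o["shape"] == arg]
        elif op == "count":
            return len(objects)
    return objects

# "How many red cubes are there?" -> parsed into a program:
program = [("filter_color", "red"), ("filter_shape", "cube"), ("count", None)]
print(execute(program, scene))  # 1
```

Because execution is symbolic, intermediate results (here, the filtered object set) can be carried across dialog rounds, which is what makes multi-round reasoning tractable.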
no code implementations • 4 Dec 2021 • Yao Wang, Mihai Bâce, Andreas Bulling
We propose Unified Model of Saliency and Scanpaths (UMSS), a model that learns to predict visual saliency and scanpaths (i.e. sequences of eye fixations) on information visualisations.
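To make the saliency/scanpath distinction concrete, the snippet below uses the classic winner-take-all baseline with inhibition of return to turn a saliency map into a fixation sequence. This is a common reference method, not UMSS's learned decoder.

```python
import numpy as np

def saliency_to_scanpath(saliency, n_fixations=3, inhibition_radius=1):
    """Winner-take-all with inhibition of return: repeatedly pick the most
    salient location, then suppress its neighbourhood so the next fixation
    lands elsewhere. A standard baseline for deriving scanpaths."""
    s = saliency.astype(float).copy()
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((int(y), int(x)))
        y0, y1 = max(0, y - inhibition_radius), y + inhibition_radius + 1
        x0, x1 = max(0, x - inhibition_radius), x + inhibition_radius + 1
        s[y0:y1, x0:x1] = -np.inf  # inhibition of return
    return fixations

sal = np.array([[0.1, 0.9, 0.2],
                [0.3, 0.4, 0.8],
                [0.7, 0.1, 0.2]])
print(saliency_to_scanpath(sal, n_fixations=2))  # [(0, 1), (2, 0)]
```

A learned model like UMSS replaces this hand-crafted selection rule with a decoder trained on recorded human scanpaths.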
no code implementations • ICCV 2021 • Florian Strohm, Ekta Sood, Sven Mayer, Philipp Müller, Mihai Bâce, Andreas Bulling
The encoder extracts image features and predicts a neural activation map for each face looked at by a human observer.
no code implementations • 25 Jul 2019 • Mihai Bâce, Sander Staal, Andreas Bulling
With an ever-increasing number of mobile devices competing for our attention, quantifying when, how often, or for how long users visually attend to their devices has emerged as a core challenge in mobile human-computer interaction.
no code implementations • 25 Jul 2019 • Mihai Bâce, Sander Staal, Andreas Bulling
Moreover, we discuss how our method enables the calculation of additional attention metrics that, for the first time, enable researchers from different domains to study and quantify attention allocation during mobile interactions in the wild.