Search Results for author: Seoyoung Ahn

Found 9 papers, 7 papers with code

Affinity-based Attention in Self-supervised Transformers Predicts Dynamics of Object Grouping in Humans

1 code implementation • 1 Jun 2023 • Hossein Adeli, Seoyoung Ahn, Nikolaus Kriegeskorte, Gregory Zelinsky

We found that our affinity-spread models, built on feature maps from self-supervised Transformers, significantly outperformed baseline and CNN-based models at predicting human reaction-time patterns, despite never being trained on the task or with any object labels.

Object Representation Learning
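As an illustration of the affinity-spread idea above, here is a minimal sketch assuming precomputed patch embeddings from a self-supervised ViT; the DINO-style tensor shapes, the softmax temperature, and the number of spreading steps are illustrative assumptions, not the authors' exact settings:

```python
import torch
import torch.nn.functional as F

def affinity_spread(features, seed_idx, n_steps=10):
    """Iteratively spread grouping activation over patch affinities.

    features: (N, D) patch embeddings from a self-supervised ViT
    seed_idx: index of the cued patch where spreading starts
    """
    # Pairwise affinity from cosine similarity, row-normalized
    f = F.normalize(features, dim=-1)
    affinity = torch.softmax(f @ f.t() / 0.07, dim=-1)  # temperature is illustrative

    # One-hot activation on the seed patch
    act = torch.zeros(features.shape[0])
    act[seed_idx] = 1.0

    for _ in range(n_steps):
        act = affinity.t() @ act                # spread along affinities
        act = act / act.max().clamp(min=1e-8)   # keep values bounded
    return act  # higher values = patches grouped with the seed

# Example with random features standing in for ViT patch embeddings
feats = torch.randn(196, 384)   # 14x14 patches, ViT-S width
group_map = affinity_spread(feats, seed_idx=100)
```

Roughly, how quickly activation spreads between two cued patches is the kind of quantity one would compare against human grouping reaction times.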

Gazeformer: Scalable, Effective and Fast Prediction of Goal-Directed Human Attention

1 code implementation • CVPR 2023 • Sounak Mondal, Zhibo Yang, Seoyoung Ahn, Dimitris Samaras, Gregory Zelinsky, Minh Hoai

In response, we pose ZeroGaze, a new variant of zero-shot learning in which gaze is predicted for never-before-searched objects, and we develop a novel model, Gazeformer, to solve the ZeroGaze problem.

Gaze Prediction • Language Modelling +2
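To make the ZeroGaze setup concrete, a toy sketch follows: a fixation decoder conditioned on a pooled language embedding of the target name, so never-searched target words can be handled through the text encoder's semantic space. The module names, layer sizes, and fusion-by-addition are assumptions, not the released Gazeformer architecture:

```python
import torch
import torch.nn as nn

class FixationDecoder(nn.Module):
    """Toy transformer decoder that emits a fixation sequence conditioned
    on image features and a language embedding of the target name."""
    def __init__(self, d_model=256, n_fixations=6):
        super().__init__()
        self.query = nn.Parameter(torch.randn(n_fixations, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.to_xy = nn.Linear(d_model, 2)  # predict (x, y) per fixation

    def forward(self, image_feats, target_emb):
        # Fuse the target-name embedding into the visual memory so the
        # same weights generalize to never-searched target words
        memory = image_feats + target_emb.unsqueeze(1)
        queries = self.query.unsqueeze(0).expand(image_feats.size(0), -1, -1)
        return self.to_xy(self.decoder(queries, memory))

# Illustrative inputs: 49 image tokens and one pooled text embedding
img = torch.randn(1, 49, 256)   # e.g., CNN/ViT feature-map tokens
txt = torch.randn(1, 256)       # e.g., pooled language-model embedding
scanpath = FixationDecoder()(img, txt)   # (1, 6, 2) fixation coordinates
```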

Unifying Top-down and Bottom-up Scanpath Prediction Using Transformers

1 code implementation • 16 Mar 2023 • Zhibo Yang, Sounak Mondal, Seoyoung Ahn, Ruoyu Xue, Gregory Zelinsky, Minh Hoai, Dimitris Samaras

Most models of visual attention aim at predicting either top-down or bottom-up control, as studied using different visual search and free-viewing tasks.

Scanpath Prediction

Reconstruction-guided attention improves the robustness and shape processing of neural networks

1 code implementation • 27 Sep 2022 • Seoyoung Ahn, Hossein Adeli, Gregory J. Zelinsky

Ablation studies further reveal two complementary roles of spatial and feature-based attention in robust object recognition: the former is largely consistent with spatial masking benefits in the attention literature (the reconstruction serves as a mask), while the latter mainly contributes to the model's inference speed (i.e., the number of time steps needed to reach a given confidence threshold) by reducing the space of possible object hypotheses.

Object • Object Recognition +1
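A minimal sketch of the spatial-masking role described above, assuming a toy autoencoder: the reconstruction of the current object hypothesis is reused as a mask that gates the input on the next recurrent step. The architecture and step count are placeholders, not the paper's model:

```python
import torch
import torch.nn as nn

class ReconstructionGuidedStep(nn.Module):
    """One recurrent loop: reconstruct the current object hypothesis,
    then reuse the reconstruction as a spatial attention mask."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, image, n_steps=3):
        masked = image
        for _ in range(n_steps):
            recon = self.decoder(self.encoder(masked))
            # The reconstruction acts as a mask: input evidence consistent
            # with the current hypothesis is kept, clutter fades out
            masked = image * recon
        return masked, recon

x = torch.rand(1, 1, 28, 28)   # e.g., an MNIST-sized input
masked, recon = ReconstructionGuidedStep()(x)
```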

Target-absent Human Attention

1 code implementation • 4 Jul 2022 • Zhibo Yang, Sounak Mondal, Seoyoung Ahn, Gregory Zelinsky, Minh Hoai, Dimitris Samaras

In this paper, we propose the first data-driven computational model that addresses the search-termination problem and predicts the scanpath of search fixations made by people searching for targets that do not appear in images.

Imitation Learning
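One way to picture the search-termination idea: treat stopping as one extra action in a discretized fixation vocabulary and train by behavior cloning on human scanpaths. The grid size, STOP token, and feature dimensions below are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

GRID = 16                 # discretize fixations onto a 16x16 grid
STOP = GRID * GRID        # extra action: terminate the search

policy = nn.Sequential(   # toy stand-in for a scanpath model
    nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, GRID * GRID + 1)
)

def imitation_loss(state_feats, human_actions):
    """Behavior cloning: match human fixations, including the decision
    to stop searching when the target is absent."""
    logits = policy(state_feats)
    return nn.functional.cross_entropy(logits, human_actions)

# Illustrative batch: 8 states; the last human action is "stop searching"
states = torch.randn(8, 512)
actions = torch.tensor([5, 37, 200, 90, 14, 141, 77, STOP])
loss = imitation_loss(states, actions)
loss.backward()
```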

Recurrent Attention Models with Object-centric Capsule Representation for Multi-object Recognition

1 code implementation • 11 Oct 2021 • Hossein Adeli, Seoyoung Ahn, Gregory Zelinsky

The visual system processes a scene using a sequence of selective glimpses, each driven by spatial and object-based attention.

Object • Object Recognition
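A minimal sketch of the sequential-glimpse loop, assuming a toy recurrent agent; the capsule-based object representation is omitted, and the glimpse size and network widths are invented for illustration:

```python
import torch
import torch.nn as nn

class GlimpseAgent(nn.Module):
    """Process a scene as a sequence of selective glimpses,
    updating a recurrent state after each one."""
    def __init__(self, glimpse=8, hidden=128):
        super().__init__()
        self.g, self.h_dim = glimpse, hidden
        self.encode = nn.Linear(glimpse * glimpse, hidden)
        self.rnn = nn.GRUCell(hidden, hidden)
        self.where = nn.Linear(hidden, 2)   # where to attend next

    def forward(self, image, n_glimpses=4):
        H, W = image.shape
        h = image.new_zeros(1, self.h_dim)
        y, x = (H - self.g) // 2, (W - self.g) // 2   # start at center
        for _ in range(n_glimpses):
            patch = image[y:y + self.g, x:x + self.g].reshape(1, -1)
            h = self.rnn(torch.relu(self.encode(patch)), h)
            loc = torch.sigmoid(self.where(h)).squeeze(0)
            y = int(loc[0].item() * (H - self.g))   # spatial attention:
            x = int(loc[1].item() * (W - self.g))   # pick next glimpse
        return h  # final state summarizes the attended objects

state = GlimpseAgent()(torch.rand(32, 32))
```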

Predicting Goal-directed Attention Control Using Inverse-Reinforcement Learning

no code implementations • 31 Jan 2020 • Gregory J. Zelinsky, Yupei Chen, Seoyoung Ahn, Hossein Adeli, Zhibo Yang, Lihan Huang, Dimitrios Samaras, Minh Hoai

Using machine learning and the psychologically meaningful principle of reward, it is possible to learn the visual features used in goal-directed attention control.

BIG-bench Machine Learning • reinforcement-learning +1
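To sketch the reward-learning idea, here is a simplified maximum-entropy IRL-style update that fits a reward map over image locations so that a softmax fixation policy matches observed human fixation frequencies; the features and update rule are stand-ins, not the paper's method:

```python
import numpy as np

def maxent_irl_step(features, human_visits, theta, lr=0.1):
    """One MaxEnt-IRL-style update: nudge reward weights so the model's
    fixation distribution matches where humans actually looked.

    features: (N, D) per-location features; human_visits: (N,) frequencies.
    """
    reward = features @ theta
    reward -= reward.max()                    # numerical stability
    model_visits = np.exp(reward) / np.exp(reward).sum()  # softmax policy
    # Max-entropy gradient: difference in expected location features
    grad = features.T @ (human_visits - model_visits)
    return theta + lr * grad

rng = np.random.default_rng(0)
feats = rng.normal(size=(64, 16))    # features for an 8x8 grid of locations
human = rng.dirichlet(np.ones(64))   # observed human fixation frequencies
theta = np.zeros(16)                 # reward weights to be learned
for _ in range(100):
    theta = maxent_irl_step(feats, human, theta)
reward_map = (feats @ theta).reshape(8, 8)   # learned reward over locations
```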

Effects of Linguistic Labels on Learned Visual Representations in Convolutional Neural Networks: Labels matter!

no code implementations • 25 Sep 2019 • Seoyoung Ahn, Gregory Zelinsky, Gary Lupyan

We investigated how the visual representations learnt by CNNs change when training uses different linguistic labels (e.g., basic-level labels only, superordinate-level labels only, or both at the same time) and how they compare to human behavior when people are asked to select which of three images is most different.

Odd One Out
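A hypothetical sketch of the labeling manipulation: the same images can be trained against basic-level labels, superordinate labels, or both, by remapping the target vector. The class names and the basic-to-superordinate mapping below are invented for illustration:

```python
import torch

# Hypothetical basic-level classes and their superordinate categories
BASIC = ["beagle", "tabby", "sparrow", "robin"]
SUPER = {"beagle": "dog", "tabby": "cat", "sparrow": "bird", "robin": "bird"}
SUPER_IDS = {name: i for i, name in enumerate(sorted(set(SUPER.values())))}

def make_targets(basic_ids, condition):
    """Remap training targets per labeling condition: 'basic',
    'superordinate', or 'both' (a multi-hot vector over all labels)."""
    if condition == "basic":
        return basic_ids
    sup = torch.tensor([SUPER_IDS[SUPER[BASIC[int(i)]]] for i in basic_ids])
    if condition == "superordinate":
        return sup
    # 'both': multi-hot over basic classes followed by superordinate ones
    t = torch.zeros(len(basic_ids), len(BASIC) + len(SUPER_IDS))
    t[range(len(basic_ids)), basic_ids] = 1.0
    t[range(len(basic_ids)), len(BASIC) + sup] = 1.0
    return t

# e.g., images labeled beagle, sparrow, robin under each condition
ids = torch.tensor([0, 2, 3])
targets = make_targets(ids, "superordinate")   # dog, bird, bird indices
```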
