1 code implementation • 1 Jun 2023 • Hossein Adeli, Seoyoung Ahn, Nikolaus Kriegeskorte, Gregory Zelinsky
We found that our affinity-spread models, built on feature maps from self-supervised Transformers, showed significant improvement over baseline and CNN-based models in predicting human reaction-time patterns, despite never being trained on the task or with any object labels.
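The general idea of affinity spread over patch features can be sketched as follows. This is an illustrative assumption, not the authors' code: the function name, the cosine-similarity affinity graph, and the thresholding scheme are all hypothetical choices, and in practice the features would come from a self-supervised Transformer rather than synthetic vectors.

```python
# Hypothetical sketch of affinity spread over Transformer patch features.
# Function name, shapes, and the thresholded cosine-affinity graph are
# illustrative assumptions, not the paper's implementation.
import numpy as np

def spread_affinity(features, seed_idx, steps=10, tau=0.9):
    """Iteratively spread activation from a seed patch over an affinity
    graph built from patch-feature similarity."""
    # Normalize patch features (n_patches x dim) to unit length.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    affinity = f @ f.T                                   # cosine similarity
    affinity = np.where(affinity > tau, affinity, 0.0)   # drop weak links
    # Row-normalize so each step is a weighted average over neighbors
    # (the self-similarity of 1.0 on the diagonal keeps rows nonzero).
    affinity /= affinity.sum(axis=1, keepdims=True)
    act = np.zeros(len(features))
    act[seed_idx] = 1.0
    for _ in range(steps):
        act = affinity @ act
        act[seed_idx] = max(act[seed_idx], 1.0)          # keep the seed active
    return act
```

With features that cluster by object, activation spreads within the seed's cluster and stays near zero elsewhere, which is the grouping behavior the abstract describes.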
1 code implementation • 27 Sep 2022 • Seoyoung Ahn, Hossein Adeli, Gregory J. Zelinsky
Ablation studies further reveal two complementary roles of spatial and feature-based attention in robust object recognition: the former is largely consistent with spatial masking benefits reported in the attention literature (the reconstruction serves as a mask), while the latter mainly improves the model's inference speed (i.e., the number of time steps needed to reach a given confidence threshold) by reducing the space of possible object hypotheses.
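Why pruning the hypothesis space shortens inference can be illustrated with a minimal evidence-accumulation sketch. Everything here is a hypothetical toy, not the paper's model: evidence drifts toward the true class over time steps, and restricting the set of active class hypotheses (a stand-in for feature-based attention) lets the softmax confidence cross a fixed threshold in fewer steps.

```python
# Toy evidence-accumulation sketch (illustrative assumption, not the
# authors' architecture): count steps until confidence in the best
# class hypothesis exceeds a threshold, with an optional hypothesis mask.
import numpy as np

def steps_to_threshold(evidence, n_classes, threshold=0.9, mask=None):
    """Accumulate noisy evidence per class; return the number of steps
    until the softmax confidence of the best active class exceeds the
    threshold. `mask` optionally restricts the hypothesis space."""
    rng = np.random.default_rng(0)
    logits = np.zeros(n_classes)
    active = np.ones(n_classes, bool) if mask is None else mask
    for step in range(1, 1000):
        logits += evidence + 0.3 * rng.standard_normal(n_classes)
        z = np.where(active, logits, -np.inf)   # exclude masked hypotheses
        p = np.exp(z - z.max())
        p /= p.sum()
        if p.max() > threshold:
            return step
    return 1000
```

With fewer competing hypotheses in the softmax denominator, the same evidence trajectory reaches the confidence threshold no later, matching the abstract's claim about inference speed.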
1 code implementation • 11 Oct 2021 • Hossein Adeli, Seoyoung Ahn, Gregory Zelinsky
The visual system processes a scene using a sequence of selective glimpses, each driven by spatial and object-based attention.
no code implementations • 31 Jan 2020 • Gregory J. Zelinsky, Yupei Chen, Seoyoung Ahn, Hossein Adeli, Zhibo Yang, Lihan Huang, Dimitrios Samaras, Minh Hoai
Using machine learning and the psychologically meaningful principle of reward, it is possible to learn the visual features used in goal-directed attention control.
no code implementations • 23 Nov 2018 • Hossein Adeli, Gregory Zelinsky
Here we extend this work by building a more brain-inspired deep network model of the primate ATTention Network (ATTNet) that learns to shift its attention so as to maximize the reward.
no code implementations • NeurIPS 2016 • Zijun Wei, Hossein Adeli, Minh Hoai Nguyen, Greg Zelinsky, Dimitris Samaras
Learned region sparsity has achieved state-of-the-art performance in classification tasks by selecting and integrating sparse local information into global decisions.
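The core idea of a sparse set of local regions feeding a global decision can be sketched in a few lines. This is a hedged illustration under assumed names and a simple linear scorer, not the paper's formulation: score every candidate region, keep only the top-k, and aggregate those scores into the image-level decision.

```python
# Hedged sketch of region sparsity (function name, linear scoring, and
# mean aggregation are illustrative assumptions, not the paper's model).
import numpy as np

def sparse_region_decision(region_feats, w, k=3):
    """Score each region with a linear classifier, keep only the k
    highest-scoring regions, and average their scores into a single
    global decision score."""
    scores = region_feats @ w            # one score per candidate region
    top_k = np.argsort(scores)[-k:]      # indices of the k best regions
    return scores[top_k].mean(), top_k
```

The sparsity comes from discarding all but k regions before aggregation, so the global decision depends only on a small, informative subset of the image.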