Search Results for author: Hossein Adeli

Found 6 papers, 3 papers with code

Affinity-based Attention in Self-supervised Transformers Predicts Dynamics of Object Grouping in Humans

1 code implementation · 1 Jun 2023 · Hossein Adeli, Seoyoung Ahn, Nikolaus Kriegeskorte, Gregory Zelinsky

We found that our affinity-spread models, built on feature maps from self-supervised Transformers, showed significant improvement over baseline and CNN-based models at predicting human reaction-time patterns, despite not being trained on the task or with any object labels.

Object Representation Learning
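
To make the affinity-spread idea concrete, here is a minimal sketch, not the authors' code: it assumes a generic row-softmax affinity over patch features and a simple threshold rule as the reaction-time proxy, and the feature matrix is a random stand-in for the self-supervised Transformer features used in the paper.

```python
# Minimal sketch of affinity-based activation spreading over patch features.
# The affinity construction and the threshold rule are illustrative assumptions.
import numpy as np

def affinity_matrix(feats, temperature=0.1):
    """Row-stochastic affinity between patch features (n_patches x dim)."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    w = np.exp(sim)
    return w / w.sum(axis=1, keepdims=True)

def spread_steps(affinity, cue_idx, probe_idx, thresh=0.05, max_steps=200):
    """Spread activation from a cue patch; the number of steps until the
    probe patch crosses a threshold serves as a reaction-time proxy."""
    act = np.zeros(affinity.shape[0])
    act[cue_idx] = 1.0
    for t in range(1, max_steps + 1):
        act = affinity.T @ act                      # one diffusion step
        if act[probe_idx] >= thresh:
            return t
    return max_steps

# Mock patch features; in the paper these come from a self-supervised ViT.
feats = np.random.randn(196, 64)
A = affinity_matrix(feats)
print(spread_steps(A, cue_idx=0, probe_idx=100))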

Reconstruction-guided attention improves the robustness and shape processing of neural networks

1 code implementation · 27 Sep 2022 · Seoyoung Ahn, Hossein Adeli, Gregory J. Zelinsky

Ablation studies further reveal two complementary roles of spatial and feature-based attention in robust object recognition: the former is largely consistent with spatial masking benefits in the attention literature (the reconstruction serves as a mask), while the latter mainly contributes to the model's inference speed (i.e., the number of time steps to reach a certain confidence threshold) by reducing the space of possible object hypotheses.

Object · Object Recognition +1
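
The masking role described above can be sketched in a few lines, under heavy assumptions: the encoder, decoder, and classifier below are untrained stand-ins rather than the paper's model, and the stopping rule (steps to reach a confidence threshold) mirrors the inference-speed measure mentioned in the abstract.

```python
# Minimal sketch: an object reconstruction is reused as a spatial mask on the
# input, and the step count to pass a confidence threshold stands in for
# inference speed. All modules are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Linear(784, 64)        # stand-in encoder
decoder = nn.Linear(64, 784)        # stand-in reconstruction head
classifier = nn.Linear(64, 10)

def recognize(x, conf_thresh=0.9, max_steps=10):
    mask = torch.ones_like(x)                       # start with the full image
    for step in range(1, max_steps + 1):
        z = torch.relu(encoder(x * mask))           # spatial attention: masked input
        probs = F.softmax(classifier(z), dim=-1)
        if probs.max() >= conf_thresh:              # confident enough -> stop
            return probs.argmax(dim=-1), step
        mask = torch.sigmoid(decoder(z))            # reconstruction becomes the next mask
    return probs.argmax(dim=-1), max_steps

x = torch.rand(1, 784)
label, steps = recognize(x)
print(label.item(), steps)
```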

Recurrent Attention Models with Object-centric Capsule Representation for Multi-object Recognition

1 code implementation · 11 Oct 2021 · Hossein Adeli, Seoyoung Ahn, Gregory Zelinsky

The visual system processes a scene using a sequence of selective glimpses, each driven by spatial and object-based attention.

Object · Object Recognition
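
A minimal sketch of the glimpse-sequence idea follows, with the paper's capsule read-out replaced by a plain linear layer and a crude intensity-based saliency map standing in for learned spatial and object-based attention; all names and shapes here are illustrative assumptions.

```python
# Minimal sketch: an attention map picks a location, a glimpse crop is encoded,
# and class evidence accumulates across glimpses.
import torch
import torch.nn as nn

glimpse_encoder = nn.Linear(8 * 8, 32)   # encodes an 8x8 glimpse crop
readout = nn.Linear(32, 10)              # stand-in for the capsule read-out

def recognize_sequentially(image, n_glimpses=3):
    """image: (32, 32) tensor; returns accumulated class evidence."""
    evidence = torch.zeros(10)
    saliency = image.clone()                         # crude spatial attention map
    for _ in range(n_glimpses):
        idx = saliency.flatten().argmax()
        r, c = divmod(idx.item(), image.shape[1])
        r, c = min(r, 24), min(c, 24)                # keep the 8x8 crop in bounds
        glimpse = image[r:r + 8, c:c + 8].reshape(1, -1)
        evidence = evidence + readout(torch.relu(glimpse_encoder(glimpse))).squeeze(0)
        saliency[r:r + 8, c:c + 8] = 0.0             # inhibition of return
    return evidence

print(recognize_sequentially(torch.rand(32, 32)).argmax().item())
```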

Predicting Goal-directed Attention Control Using Inverse-Reinforcement Learning

no code implementations · 31 Jan 2020 · Gregory J. Zelinsky, Yupei Chen, Seoyoung Ahn, Hossein Adeli, Zhibo Yang, Lihan Huang, Dimitrios Samaras, Minh Hoai

Using machine learning and the psychologically-meaningful principle of reward, it is possible to learn the visual features used in goal-directed attention control.

BIG-bench Machine Learning · reinforcement-learning +1
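
The reward principle can be illustrated with a toy sketch that is much simpler than the paper's inverse-reinforcement-learning approach: treat human fixations as rewarded locations and learn feature weights whose priority map ranks those locations highly. The feature maps and fixations below are random stand-ins, and the loss is a generic ranking objective, not the paper's method.

```python
# Toy sketch of reward-driven feature learning for attention control:
# learn feature weights so fixated locations outscore the average location.
import torch

n_features, n_locations = 16, 100
feature_maps = torch.rand(n_features, n_locations)   # per-location visual features
fixated = torch.randint(0, n_locations, (20,))        # indices people fixated (mock data)

weights = torch.zeros(n_features, requires_grad=True)
optimizer = torch.optim.SGD([weights], lr=0.1)

for _ in range(200):
    priority = weights @ feature_maps                 # (n_locations,) priority map
    # ranking loss: fixated locations should outscore the mean location
    loss = torch.nn.functional.softplus(priority.mean() - priority[fixated]).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(weights.detach().topk(3).indices.tolist())      # most attention-relevant features
```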

Learning to attend in a brain-inspired deep neural network

no code implementations · 23 Nov 2018 · Hossein Adeli, Gregory Zelinsky

Here we extend this work by building a more brain-inspired deep network model of the primate ATTention Network (ATTNet) that learns to shift its attention so as to maximize the reward.
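
A toy sketch of the reward-maximizing attention-shift idea, assuming a simple softmax policy over a 7x7 grid and a REINFORCE update; this illustrates the training principle only, not ATTNet's architecture.

```python
# Toy sketch: a softmax policy over grid locations is updated with REINFORCE
# whenever the sampled attention shift lands on the rewarded (target) location.
import torch

grid = 7 * 7
logits = torch.zeros(grid, requires_grad=True)        # policy over attention shifts
optimizer = torch.optim.SGD([logits], lr=0.5)
target = 24                                           # toy rewarded location

for _ in range(500):
    probs = torch.softmax(logits, dim=0)
    shift = torch.multinomial(probs, 1).item()        # sample an attention shift
    reward = 1.0 if shift == target else 0.0
    loss = -reward * torch.log(probs[shift])          # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(torch.softmax(logits, dim=0).argmax().item())   # typically converges to the target
```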

Learned Region Sparsity and Diversity Also Predicts Visual Attention

no code implementations · NeurIPS 2016 · Zijun Wei, Hossein Adeli, Minh Hoai Nguyen, Greg Zelinsky, Dimitris Samaras

Learned region sparsity has achieved state-of-the-art performance in classification tasks by exploiting and integrating a sparse set of local information into global decisions.

General Classification
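
The core mechanism, integrating a sparse set of local region scores into a global decision, can be sketched as follows; a random region-scoring layer and a simple top-k selection stand in for the paper's learned sparsity, and all sizes are illustrative.

```python
# Minimal sketch of region sparsity: score every local region, keep only a
# sparse top-k subset, and integrate those scores into the global decision.
# The kept regions double as a crude map of where attention should go.
import torch
import torch.nn as nn

n_regions, feat_dim, n_classes, k = 50, 128, 20, 5
region_scorer = nn.Linear(feat_dim, n_classes)        # per-region class scores

def classify_with_sparse_regions(region_feats):
    scores = region_scorer(region_feats)              # (n_regions, n_classes)
    strength = scores.max(dim=1).values               # each region's best class score
    keep = strength.topk(k).indices                   # sparse subset of regions
    global_scores = scores[keep].mean(dim=0)          # integrate into a global decision
    return global_scores.argmax().item(), keep        # prediction + attended regions

pred, attended = classify_with_sparse_regions(torch.randn(n_regions, feat_dim))
print(pred, attended.tolist())
```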
