Search Results for author: Gregory J. Zelinsky

Found 5 papers, 1 paper with code

Reconstruction-guided attention improves the robustness and shape processing of neural networks

1 code implementation • 27 Sep 2022 • Seoyoung Ahn, Hossein Adeli, Gregory J. Zelinsky

Ablation studies further reveal two complementary roles of spatial and feature-based attention in robust object recognition, with the former largely consistent with spatial masking benefits in the attention literature (the reconstruction serves as a mask) and the latter mainly contributing to the model's inference speed (i.e., number of time steps to reach a certain confidence threshold) by reducing the space of possible object hypotheses.

Object Recognition +1
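The snippet above describes two mechanisms: a reconstruction that acts as a spatial mask on the input, and a confidence threshold that determines how many recurrent steps inference takes. The sketch below is purely illustrative and is not the authors' model; the encoder, decoder, and classifier modules are hypothetical stand-ins for whatever architecture is actually used.

```python
# Illustrative sketch only (not the paper's code): a recurrent recognizer that
# (1) turns a reconstruction into a spatial mask on the input and
# (2) halts once class confidence exceeds a threshold, so fewer remaining
# object hypotheses translate into fewer time steps.
import torch
import torch.nn.functional as F

def recurrent_recognize(image, encoder, decoder, classifier,
                        conf_threshold=0.9, max_steps=10):
    """image: (1, C, H, W); encoder/decoder/classifier: small nn.Modules."""
    mask = torch.ones_like(image[:, :1])              # start unmasked
    pred = None
    for step in range(1, max_steps + 1):
        feats = encoder(image * mask)                 # attend to masked input
        probs = F.softmax(classifier(feats), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= conf_threshold:             # confident enough: stop early
            return pred.item(), step
        recon = decoder(feats)                        # reconstruct the attended object
        mask = torch.sigmoid(recon).mean(1, keepdim=True)  # reconstruction -> spatial mask
    return pred.item(), max_steps

# Example wiring with toy, untrained modules (shapes only):
# enc = torch.nn.Conv2d(3, 8, 3, padding=1)
# dec = torch.nn.Conv2d(8, 3, 3, padding=1)
# clf = torch.nn.Sequential(torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
#                           torch.nn.Linear(8, 10))
# label, steps = recurrent_recognize(torch.rand(1, 3, 32, 32), enc, dec, clf)
```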

Visual attention analysis of pathologists examining whole slide images of Prostate cancer

no code implementations • 17 Feb 2022 • Souradeep Chakraborty, Ke Ma, Rajarsi Gupta, Beatrice Knudsen, Gregory J. Zelinsky, Joel H. Saltz, Dimitris Samaras

To quantify the relationship between a pathologist's attention and evidence for cancer in the WSI, we obtained tumor annotations from a genitourinary specialist.

Navigate • whole slide images
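One simple way to quantify the kind of attention-versus-annotation relationship described above (not necessarily the metric used in the paper) is to smooth fixations into a density map and measure how much of that density falls inside the annotated tumor region. The function and variable names below are hypothetical.

```python
# Illustrative sketch only: share of a pathologist's fixation density that
# falls inside a specialist's tumor annotation mask.
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_on_tumor(fixations_xy, tumor_mask, sigma=30.0):
    """fixations_xy: (N, 2) array of (x, y) pixel coordinates on a WSI thumbnail;
    tumor_mask: (H, W) boolean array derived from the tumor annotation."""
    h, w = tumor_mask.shape
    density = np.zeros((h, w), dtype=float)
    for x, y in fixations_xy:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < h and 0 <= xi < w:
            density[yi, xi] += 1.0
    density = gaussian_filter(density, sigma)   # smooth fixations into a heatmap
    total = density.sum()
    return float(density[tumor_mask].sum() / total) if total > 0 else 0.0
```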

Predicting Goal-directed Attention Control Using Inverse-Reinforcement Learning

no code implementations • 31 Jan 2020 • Gregory J. Zelinsky, Yupei Chen, Seoyoung Ahn, Hossein Adeli, Zhibo Yang, Lihan Huang, Dimitrios Samaras, Minh Hoai

Using machine learning and the psychologically-meaningful principle of reward, it is possible to learn the visual features used in goal-directed attention control.

BIG-bench Machine Learning • reinforcement-learning +1
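As a rough illustration of the reward-based idea in the snippet above (not the paper's inverse-reinforcement-learning model), one can fit a linear reward over per-location visual features by matching the model's expected features to those of human-fixated locations, in the style of maximum-entropy IRL. All names below are hypothetical.

```python
# Illustrative sketch only: learn weights w so that reward = features @ w makes
# human-fixated locations likely under a softmax policy over candidate locations.
import numpy as np

def fit_reward_weights(features, fixated_idx, lr=0.1, iters=200):
    """features: (L, D) visual features for L candidate fixation locations;
    fixated_idx: indices of the locations humans actually fixated."""
    w = np.zeros(features.shape[1])
    observed = features[fixated_idx].mean(axis=0)   # human feature expectation
    for _ in range(iters):
        reward = features @ w
        p = np.exp(reward - reward.max())
        p /= p.sum()                                # softmax policy over locations
        expected = p @ features                     # model's feature expectation
        w += lr * (observed - expected)             # feature-matching gradient step
    return w

# Locations with high features @ w are the ones the fitted reward predicts
# goal-directed attention will be drawn to.
```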

Studying Relationships between Human Gaze, Description, and Computer Vision

no code implementations CVPR 2013 Kiwon Yun, Yifan Peng, Dimitris Samaras, Gregory J. Zelinsky, Tamara L. Berg

We posit that user behavior during natural viewing of images contains an abundance of information about the content of images as well as information related to user intent and user-defined content importance.
