Search Results for author: Kai Kohlhoff

Found 3 papers, 1 paper with code

Learning From Unique Perspectives: User-Aware Saliency Modeling

no code implementations • CVPR 2023 • Shi Chen, Nachiappan Valliappan, Shaolei Shen, Xinyu Ye, Kai Kohlhoff, Junfeng He

Our work aims to advance attention research from three distinct perspectives: (1) We present a new model with the flexibility to capture attention patterns of various combinations of users, so that we can adaptively predict personalized attention, user group attention, and general saliency at the same time with one single model; (2) To augment models with knowledge about the composition of attention from different users, we further propose a principled learning method to understand visual attention in a progressive manner; and (3) We carry out extensive analyses on publicly available saliency datasets to shed light on the roles of visual preferences.
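
The abstract does not detail the architecture, but the core idea of predicting saliency conditioned on a user (or a "generic" user for plain saliency) can be sketched as follows. Everything below, including the `UserAwareSaliency` module, its embedding-based conditioning, and the layer sizes, is an illustrative assumption rather than the paper's model.

```python
# Minimal sketch of a user-conditioned saliency predictor (illustrative only;
# module name, conditioning scheme, and dimensions are assumptions).
import torch
import torch.nn as nn

class UserAwareSaliency(nn.Module):
    def __init__(self, num_users: int, embed_dim: int = 32):
        super().__init__()
        # Index 0 is reserved for user-agnostic (general) saliency;
        # indices 1..num_users select a personalized attention profile.
        self.user_embed = nn.Embedding(num_users + 1, embed_dim)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64 + embed_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),  # single-channel saliency logits
        )

    def forward(self, image: torch.Tensor, user_id: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(image)                        # (B, 64, H/4, W/4)
        u = self.user_embed(user_id)                       # (B, embed_dim)
        u = u[:, :, None, None].expand(-1, -1, *feats.shape[-2:])
        logits = self.decoder(torch.cat([feats, u], dim=1))
        return torch.sigmoid(logits)                       # (B, 1, H/4, W/4)

model = UserAwareSaliency(num_users=10)
img = torch.randn(2, 3, 64, 64)
personal = model(img, torch.tensor([3, 7]))   # per-user predictions
general = model(img, torch.tensor([0, 0]))    # user-agnostic saliency
print(personal.shape, general.shape)
```

One model instance serves both personalized and general predictions simply by switching the user index, which is the flavor of flexibility the abstract describes.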

Deep Saliency Prior for Reducing Visual Distraction

no code implementations • CVPR 2022 • Kfir Aberman, Junfeng He, Yossi Gandelsman, Inbar Mosseri, David E. Jacobs, Kai Kohlhoff, Yael Pritch, Michael Rubinstein

Using only a model that was trained to predict where people look at images, and no additional training data, we can produce a range of powerful editing effects for reducing distraction in images.
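
In spirit, the method treats a frozen, pretrained saliency predictor as a differentiable prior: an edit is optimized so that predicted saliency inside a distractor region drops. The sketch below captures only that general recipe; the placeholder `saliency_net`, the additive masked edit, and the loss weights are assumptions, not the paper's editing operators.

```python
# Illustrative sketch of saliency-guided editing: use a frozen saliency
# model as a differentiable loss and optimize an image edit that lowers
# predicted saliency inside a distractor mask. The saliency network here
# is a stand-in, not a pretrained model.
import torch
import torch.nn as nn

saliency_net = nn.Sequential(          # placeholder for a pretrained predictor
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
).eval()
for p in saliency_net.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 64, 64)       # input photo (toy random image here)
mask = torch.zeros(1, 1, 64, 64)       # 1 where the distractor is
mask[..., 20:40, 20:40] = 1.0

delta = torch.zeros_like(image, requires_grad=True)   # the edit being optimized
opt = torch.optim.Adam([delta], lr=0.05)

for step in range(200):
    edited = (image + mask * delta).clamp(0, 1)        # edit only the masked region
    sal = saliency_net(edited)
    # Push saliency down inside the mask while keeping the edit small.
    loss = (sal * mask).mean() + 0.1 * delta.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

final = (image + mask * delta).clamp(0, 1)
print("masked saliency:", (saliency_net(final) * mask).mean().item())
```

No training data is involved: only gradients of the fixed saliency model with respect to the image drive the edit, which is the key point of the abstract.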

Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds

3 code implementations • 22 Feb 2018 • Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, Patrick Riley

We introduce tensor field neural networks, which are locally equivariant to 3D rotations, translations, and permutations of points at every layer.

Data Augmentation • Translation
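
The claim of layer-wise equivariance can be made concrete with a small numerical check: rotating (and translating) the input point cloud should rotate the layer's vector outputs in the same way. The `vector_filter` below is only a toy radially weighted filter with this property, standing in for the paper's tensor field layers.

```python
# Toy numerical check of rotation and translation equivariance for a
# point-cloud filter. The layer below (a radially weighted sum of relative
# position vectors) is a simple stand-in with the equivariance property
# described in the paper; it is not the tensor field network implementation.
import torch

def vector_filter(points: torch.Tensor) -> torch.Tensor:
    """points: (N, 3) -> (N, 3). Rotation-equivariant, translation-invariant."""
    rel = points[None, :, :] - points[:, None, :]      # r_ij = x_j - x_i
    dist = rel.norm(dim=-1, keepdim=True)              # |r_ij|
    radial = torch.exp(-dist)                          # a radial weight w(|r_ij|)
    return (radial * rel).sum(dim=1)                   # sum_j w(|r_ij|) r_ij

def random_rotation() -> torch.Tensor:
    """A random 3x3 rotation matrix (orthogonal, det = +1)."""
    q, _ = torch.linalg.qr(torch.randn(3, 3))
    if torch.det(q) < 0:
        q[:, 0] = -q[:, 0]                             # flip one axis to make det = +1
    return q

torch.manual_seed(0)
x = torch.randn(8, 3)                                  # a toy point cloud
R = random_rotation()
t = torch.randn(3)

out_then_rotate = vector_filter(x) @ R.T               # rotate the outputs
rotate_then_out = vector_filter(x @ R.T + t)           # rotate and shift the inputs

# The two agree: the filter commutes with rotation and ignores translation.
print(torch.allclose(out_then_rotate, rotate_then_out, atol=1e-5))  # True
```

Permutation equivariance also holds here, since reordering the points just reorders the per-point sums; the paper's contribution is building such filters for higher-order tensor features at every layer.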
