Search Results for author: Junfeng He

Found 10 papers, 1 paper with code

Parrot: Pareto-optimal Multi-Reward Reinforcement Learning Framework for Text-to-Image Generation

no code implementations 11 Jan 2024 Seung Hyun Lee, Yinxiao Li, Junjie Ke, Innfarn Yoo, Han Zhang, Jiahui Yu, Qifei Wang, Fei Deng, Glenn Entis, Junfeng He, Gang Li, Sangpil Kim, Irfan Essa, Feng Yang

Additionally, Parrot employs a joint optimization approach for the T2I model and the prompt expansion network, facilitating the generation of quality-aware text prompts, thus further enhancing the final image quality.

Reinforcement Learning (RL) Text-to-Image Generation
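Multi-reward optimization as described for Parrot hinges on identifying Pareto-optimal candidates, samples not dominated on every reward at once. A minimal non-dominated-filtering sketch (the reward names and values are hypothetical illustrations, not the paper's algorithm):

```python
def pareto_front(points):
    """Return indices of non-dominated points (higher is better on every axis)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] >= p[k] for k in range(len(p))) and
            any(q[k] > p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Each row: hypothetical (aesthetic score, text-image alignment) per candidate.
scores = [(0.9, 0.4), (0.6, 0.8), (0.5, 0.5), (0.95, 0.35)]
print(pareto_front(scores))  # → [0, 1, 3]; candidate 2 is dominated on both rewards
```

Candidate 2 scores lower than candidate 1 on both axes, so it is filtered out; the remaining three represent different reward trade-offs.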

UniAR: Unifying Human Attention and Response Prediction on Visual Content

no code implementations 15 Dec 2023 Peizhao Li, Junfeng He, Gang Li, Rachit Bhargava, Shaolei Shen, Nachiappan Valliappan, Youwei Liang, Hongxiang Gu, Venky Ramachandran, Golnaz Farhadi, Yang Li, Kai J Kohlhoff, Vidhya Navalpakkam

Such a model would enable predicting subjective feedback such as overall satisfaction or aesthetic quality ratings, along with the underlying human attention or interaction heatmaps and viewing order, enabling designers and content-creation models to optimize their creation for human-centric improvements.

Rich Human Feedback for Text-to-Image Generation

1 code implementation 15 Dec 2023 Youwei Liang, Junfeng He, Gang Li, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, Junjie Ke, Krishnamurthy Dj Dvijotham, Katie Collins, Yiwen Luo, Yang Li, Kai J Kohlhoff, Deepak Ramachandran, Vidhya Navalpakkam

We show that the predicted rich human feedback can be leveraged to improve image generation, for example, by selecting high-quality training data to finetune and improve the generative models, or by creating masks with predicted heatmaps to inpaint the problematic regions.

Text-to-Image Generation
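The heatmap-to-mask step described above can be sketched as simple thresholding of a predicted implausibility heatmap; this is a hedged illustration (the threshold value and plain-list representation are assumptions, not the paper's exact procedure):

```python
def heatmap_to_mask(heatmap, threshold=0.5):
    """Binarize a predicted artifact heatmap (nested lists of per-pixel
    scores in [0, 1]) into a 0/1 mask; 1 marks regions to inpaint."""
    return [[1 if v >= threshold else 0 for v in row] for row in heatmap]

heatmap = [[0.1, 0.9],
           [0.7, 0.2]]
mask = heatmap_to_mask(heatmap)  # → [[0, 1], [1, 0]]
```

The resulting binary mask can then be handed to any off-the-shelf inpainting model to regenerate only the flagged regions.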

Learning From Unique Perspectives: User-Aware Saliency Modeling

no code implementations CVPR 2023 Shi Chen, Nachiappan Valliappan, Shaolei Shen, Xinyu Ye, Kai Kohlhoff, Junfeng He

Our work aims to advance attention research from three distinct perspectives: (1) We present a new model with the flexibility to capture attention patterns of various combinations of users, so that we can adaptively predict personalized attention, user group attention, and general saliency at the same time with one single model; (2) To augment models with knowledge about the composition of attention from different users, we further propose a principled learning method to understand visual attention in a progressive manner; and (3) We carry out extensive analyses on publicly available saliency datasets to shed light on the roles of visual preferences.

Teacher-Generated Spatial-Attention Labels Boost Robustness and Accuracy of Contrastive Models

no code implementations CVPR 2023 Yushi Yao, Chang Ye, Junfeng He, Gamaleldin F. Elsayed

We then train a model with a primary contrastive objective; to this standard configuration, we add a simple output head trained to predict the attentional map for each image, guided by the pseudo labels from the teacher model.

Image Retrieval Retrieval
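The setup above, a primary contrastive objective plus an auxiliary attention-prediction head, amounts to a weighted two-term loss. A minimal numeric sketch (the MSE form and the `aux_weight` value are assumptions for illustration, not the paper's exact formulation):

```python
def joint_loss(contrastive_loss, pred_attn, teacher_attn, aux_weight=0.1):
    """Total loss = contrastive term + weighted MSE to the teacher's attention map."""
    n = len(pred_attn)
    mse = sum((p - t) ** 2 for p, t in zip(pred_attn, teacher_attn)) / n
    return contrastive_loss + aux_weight * mse

# Flattened 2x2 attention maps from the student head and the teacher model.
loss = joint_loss(0.8, [0.2, 0.4, 0.6, 0.8], [0.0, 0.5, 0.5, 1.0])  # ≈ 0.8025
```

When the student head matches the teacher map exactly, the auxiliary term vanishes and only the contrastive term remains.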

Deep Saliency Prior for Reducing Visual Distraction

no code implementations CVPR 2022 Kfir Aberman, Junfeng He, Yossi Gandelsman, Inbar Mosseri, David E. Jacobs, Kai Kohlhoff, Yael Pritch, Michael Rubinstein

Using only a model that was trained to predict where people look at images, and no additional training data, we can produce a range of powerful editing effects for reducing distraction in images.
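One toy way to picture saliency-prior editing of this kind: attenuate a distractor region's intensity in proportion to its predicted saliency, so it draws less attention. This is only an illustrative fixed rule; the paper instead optimizes edits through the saliency model:

```python
def attenuate_distractor(pixels, saliency, region, strength=0.5):
    """Scale down pixel intensity inside `region` (a set of (row, col) coords)
    in proportion to predicted saliency, reducing the region's visual pull."""
    out = [row[:] for row in pixels]
    for r, c in region:
        out[r][c] = pixels[r][c] * (1.0 - strength * saliency[r][c])
    return out

pixels   = [[200, 200], [200, 200]]
saliency = [[0.0, 1.0], [0.5, 0.0]]      # hypothetical saliency predictions
edited = attenuate_distractor(pixels, saliency, {(0, 1), (1, 0)})
# highest-saliency pixel (0, 1): 200 * (1 - 0.5 * 1.0) = 100.0
```

Pixels outside the chosen region are left untouched, so the edit stays local to the distractor.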

Origin of the Electronic Structure in Single-Layer FeSe/SrTiO3 Films

no code implementations 16 Dec 2020 Defa Liu, Xianxin Wu, Fangsen Li, Yong Hu, Jianwei Huang, Yu Xu, Cong Li, Yunyi Zang, Junfeng He, Lin Zhao, Shaolong He, Chenjia Tang, Zhi Li, Lili Wang, Qingyan Wang, Guodong Liu, Zuyan Xu, Xu-Cun Ma, Qi-Kun Xue, Jiangping Hu, X. J. Zhou

These observations not only show the first direct evidence that the electronic structure of single-layer FeSe/SrTiO3 films originates from bulk FeSe through a combined effect of an electronic phase transition and an interfacial charge transfer, but also provide a quantitative basis for theoretical models in describing the electronic structure and understanding the superconducting mechanism in single-layer FeSe/SrTiO3 films.

Band Gap Superconductivity Strongly Correlated Electrons

GazeGAN - Unpaired Adversarial Image Generation for Gaze Estimation

no code implementations 27 Nov 2017 Matan Sela, Pingmei Xu, Junfeng He, Vidhya Navalpakkam, Dmitry Lagun

Recent research has demonstrated the ability to estimate gaze on mobile devices by performing inference on the image from the phone's front-facing camera, and without requiring specialized hardware.

Gaze Estimation Image Generation +1

Collaborative Hashing

no code implementations CVPR 2014 Xianglong Liu, Junfeng He, Cheng Deng, Bo Lang

Hashing techniques have become a promising approach for fast similarity search.

Image Retrieval
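Hashing-based similarity search works by comparing compact binary codes under Hamming distance instead of comparing raw feature vectors. A minimal sketch of the lookup side (a generic illustration, not necessarily the Collaborative Hashing scheme):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary codes stored as ints."""
    return bin(a ^ b).count("1")

def nearest(query_code, database):
    """Index of the database code closest to the query in Hamming distance."""
    return min(range(len(database)), key=lambda i: hamming(query_code, database[i]))

db = [0b1010, 0b0111, 0b1110]
idx = nearest(0b1011, db)  # → 0: 0b1010 differs from the query in only 1 bit
```

Because XOR and popcount are cheap bit operations, such scans stay fast even over millions of codes, which is what makes hashing attractive for large-scale retrieval.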

Hash Bit Selection: A Unified Solution for Selection Problems in Hashing

no code implementations CVPR 2013 Xianglong Liu, Junfeng He, Bo Lang, Shih-Fu Chang

We represent the bit pool as a vertex- and edge-weighted graph with the candidate bits as vertices.
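Such a vertex- and edge-weighted bit-pool graph can be sketched as a pair of dictionaries: per-bit quality weights on the vertices and pairwise weights (e.g. redundancy between bits) on the edges. The weight values below are hypothetical illustrations, not from the paper:

```python
def build_bit_graph(bit_quality, bit_similarity):
    """Bit pool as a graph: vertices = candidate bits with quality weights,
    edges = unordered bit pairs with pairwise (e.g. redundancy) weights."""
    n = len(bit_quality)
    vertices = {i: bit_quality[i] for i in range(n)}
    edges = {(i, j): bit_similarity[i][j]
             for i in range(n) for j in range(i + 1, n)}
    return vertices, edges

quality = [0.9, 0.4, 0.7]                # hypothetical per-bit quality weights
sim = [[0.0, 0.2, 0.8],
       [0.2, 0.0, 0.1],
       [0.8, 0.1, 0.0]]                  # hypothetical pairwise redundancy
vertices, edges = build_bit_graph(quality, sim)
```

Selecting a good bit subset then becomes a graph problem: prefer high vertex weights while avoiding heavy edges between chosen bits.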
