Search Results for author: Shaofeng Zhang

Found 14 papers, 4 papers with code

Continuous-Multiple Image Outpainting in One-Step via Positional Query and A Diffusion-based Approach

1 code implementation 28 Jan 2024 Shaofeng Zhang, Jinfa Huang, Qiang Zhou, Zhibin Wang, Fan Wang, Jiebo Luo, Junchi Yan

At inference, we generate images with arbitrary expansion multiples by inputting an anchor image and its corresponding positional embeddings.
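The positional-query idea can be illustrated with a toy sketch (function names and sizes here are hypothetical, not the paper's code): build sinusoidal positional embeddings for an expanded canvas, with the anchor image occupying a centered sub-grid and the remaining positions acting as queries whose content the generator would fill in.

```python
import numpy as np

def sincos_pos_embed(h, w, dim):
    """2D sinusoidal positional embeddings for an h x w grid."""
    def embed_1d(n, d):
        pos = np.arange(n)[:, None]                       # (n, 1)
        freq = 1.0 / (10000 ** (np.arange(0, d, 2) / d))  # (d/2,)
        ang = pos * freq                                  # (n, d/2)
        return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)
    ey = embed_1d(h, dim // 2)   # embeddings along y
    ex = embed_1d(w, dim // 2)   # embeddings along x
    grid = np.concatenate(
        [np.repeat(ey, w, axis=0), np.tile(ex, (h, 1))], axis=1)
    return grid.reshape(h, w, dim)

# Anchor occupies the center of a 2x-expanded canvas; the surrounding
# positional embeddings act as "queries" for the outpainted content.
anchor_h = anchor_w = 8
mult = 2  # expansion multiple, arbitrary at inference
canvas = sincos_pos_embed(anchor_h * mult, anchor_w * mult, dim=32)
top = (anchor_h * mult - anchor_h) // 2
left = (anchor_w * mult - anchor_w) // 2
anchor_pe = canvas[top:top + anchor_h, left:left + anchor_w]
print(canvas.shape, anchor_pe.shape)  # (16, 16, 32) (8, 8, 32)
```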

Image Outpainting

GMTR: Graph Matching Transformers

1 code implementation 14 Nov 2023 Jinpei Guo, Shaofeng Zhang, Runzhong Wang, Chang Liu, Junchi Yan

Meanwhile, on Pascal VOC, QueryTrans improves the accuracy of NGMv2 from $80.1\%$ to $\mathbf{83.3\%}$, and BBGM from $79.0\%$ to $\mathbf{84.5\%}$.

Ranked #1 on Graph Matching on PASCAL VOC (matching accuracy metric)

Graph Attention Graph Matching +2

HAP: Structure-Aware Masked Image Modeling for Human-Centric Perception

1 code implementation NeurIPS 2023 Junkun Yuan, Xinyu Zhang, Hao Zhou, Jian Wang, Zhongwei Qiu, Zhiyin Shao, Shaofeng Zhang, Sifan Long, Kun Kuang, Kun Yao, Junyu Han, Errui Ding, Lanfen Lin, Fei Wu, Jingdong Wang

To further capture human characteristics, we propose a structure-invariant alignment loss that enforces different masked views, guided by the human part prior, to be closely aligned for the same image.
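A minimal sketch of what such an alignment term could look like (an illustration, not the paper's implementation): pull per-part features of two masked views of the same image toward each other via cosine similarity.

```python
import numpy as np

def align_loss(view_a, view_b, eps=1e-8):
    """Hypothetical alignment loss: 1 minus the mean cosine similarity
    between per-part features of two masked views of the same image."""
    a = view_a / (np.linalg.norm(view_a, axis=-1, keepdims=True) + eps)
    b = view_b / (np.linalg.norm(view_b, axis=-1, keepdims=True) + eps)
    return float(1.0 - np.mean(np.sum(a * b, axis=-1)))

rng = np.random.default_rng(0)
parts_a = rng.normal(size=(6, 128))                   # 6 body-part features, view A
parts_b = parts_a + 0.01 * rng.normal(size=(6, 128))  # nearly aligned view B
print(align_loss(parts_a, parts_b))                   # close to 0 for aligned views
print(align_loss(parts_a, rng.normal(size=(6, 128)))) # much larger when unaligned
```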

2D Pose Estimation Attribute +3

On the Evaluation and Refinement of Vision-Language Instruction Tuning Datasets

no code implementations 10 Oct 2023 Ning Liao, Shaofeng Zhang, Renqiu Xia, Min Cao, Yu Qiao, Junchi Yan

Instead of evaluating the models directly, in this paper, we try to evaluate the Vision-Language Instruction-Tuning (VLIT) datasets.


RegionBLIP: A Unified Multi-modal Pre-training Framework for Holistic and Regional Comprehension

1 code implementation 3 Aug 2023 Qiang Zhou, Chaohui Yu, Shaofeng Zhang, Sitong Wu, Zhibing Wang, Fan Wang

To this end, we propose to extract features corresponding to regional objects as soft prompts for LLM, which provides a straightforward and scalable approach and eliminates the need for LLM fine-tuning.
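The soft-prompt mechanism can be sketched as follows (dimensions and the projection are illustrative, not RegionBLIP's actual code): project pooled region features into the LLM's embedding space and prepend them to the text token embeddings, leaving the LLM weights untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
d_vision, d_llm, n_regions, n_tokens = 256, 512, 3, 10

region_feats = rng.normal(size=(n_regions, d_vision))  # pooled region features
W = rng.normal(size=(d_vision, d_llm)) * 0.02          # learned projection (random here)
soft_prompts = region_feats @ W                        # (n_regions, d_llm)

token_embeds = rng.normal(size=(n_tokens, d_llm))      # text token embeddings
# Prepend region soft prompts to the token sequence fed to the frozen LLM.
llm_input = np.concatenate([soft_prompts, token_embeds], axis=0)
print(llm_input.shape)  # (13, 512)
```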

Image Comprehension

Patch-Level Contrasting without Patch Correspondence for Accurate and Dense Contrastive Representation Learning

no code implementations 23 Jun 2023 Shaofeng Zhang, Feng Zhu, Rui Zhao, Junchi Yan

On classification tasks with ViT-S, ADCLR achieves 77.5% top-1 accuracy on ImageNet with linear probing, outperforming our baseline (DINO, without our devised plug-in techniques) by 0.5%.

Instance Segmentation object-detection +4

Localized Contrastive Learning on Graphs

no code implementations 8 Dec 2022 Hengrui Zhang, Qitian Wu, Yu Wang, Shaofeng Zhang, Junchi Yan, Philip S. Yu

Contrastive learning methods based on InfoNCE loss are popular in node representation learning tasks on graph-structured data.
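For reference, the InfoNCE loss for a single node can be sketched as follows (a toy illustration with synthetic features, not the paper's code): the anchor should score high against its positive (an augmented view of the same node) relative to sampled negatives.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE for one anchor: -log softmax of the positive's similarity
    against the positive plus all negatives, at temperature tau."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

rng = np.random.default_rng(1)
a = rng.normal(size=64)
pos = a + 0.05 * rng.normal(size=64)      # augmented view of the same node
negs = [rng.normal(size=64) for _ in range(16)]
# Loss is small when the positive is much closer to the anchor than negatives.
print(info_nce(a, pos, negs))
```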

Contrastive Learning Data Augmentation +1

ESCo: Towards Provably Effective and Scalable Contrastive Representation Learning

no code implementations 29 Sep 2021 Hengrui Zhang, Qitian Wu, Shaofeng Zhang, Junchi Yan, David Wipf, Philip S. Yu

In this paper, we propose ESCo (Effective and Scalable Contrastive), a new contrastive framework which is essentially an instantiation of the Information Bottleneck principle under self-supervised learning settings.

Contrastive Learning Representation Learning +1

Zero-CL: Instance and Feature decorrelation for negative-free symmetric contrastive learning

no code implementations ICLR 2022 Shaofeng Zhang, Feng Zhu, Junchi Yan, Rui Zhao, Xiaokang Yang

The two proposed methods (FCL, ICL) can be combined, which we call Zero-CL, where ``Zero'' means negative samples are \textbf{zero} relevant, allowing Zero-CL to completely discard negative pairs, i.e., use \textbf{zero} negative samples.
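A decorrelation objective of this negative-free flavor can be sketched as follows (an illustrative Barlow-Twins-style loss, not Zero-CL's exact formulation): the cross-correlation matrix between the two views' standardized features is pushed toward the identity, so no negative pairs are needed.

```python
import numpy as np

def decorrelation_loss(z1, z2):
    """Negative-free loss sketch: diagonal terms of the views'
    cross-correlation are pulled to 1, off-diagonal terms to 0."""
    n = z1.shape[0]
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)  # standardize per feature
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = z1.T @ z2 / n                            # (d, d) cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return float(on_diag + off_diag)

rng = np.random.default_rng(2)
z = rng.normal(size=(512, 32))                         # batch of 512, dim 32
print(decorrelation_loss(z, z.copy()))                 # small for identical views
print(decorrelation_loss(z, rng.normal(size=(512, 32))))  # larger for unrelated views
```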

Contrastive Learning

Self-supervised representation learning via adaptive hard-positive mining

no code implementations 1 Jan 2021 Shaofeng Zhang, Junchi Yan, Xiaokang Yang

Despite their success in perception over the last decade, deep neural networks are also known to be ravenous for labeled training data, which limits their applicability to real-world problems.

Contrastive Learning Representation Learning +1

The Diversified Ensemble Neural Network

no code implementations NeurIPS 2020 Shaofeng Zhang, Meng Liu, Junchi Yan

Ensemble is a general way of improving the accuracy and stability of learning models, especially for the generalization ability on small datasets.
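The paper's diversified ensemble itself is not reproduced here, but the variance-reduction effect that ensembling builds on can be sketched with plain bagging on a synthetic regression task (all data and member models are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
true_coef = rng.normal(size=5)
X = rng.normal(size=(200, 5))
y = X @ true_coef + 0.5 * rng.normal(size=200)  # noisy linear targets

# Fit several members on bootstrap resamples, then average their predictions.
preds = []
for seed in range(10):
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(X), len(X))  # bootstrap sample with replacement
    coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    preds.append(X @ coef)
ensemble_pred = np.mean(preds, axis=0)

# By convexity of squared error (Jensen's inequality), the ensemble's MSE
# against the clean signal never exceeds the average member MSE.
member_mse = [np.mean((p - X @ true_coef) ** 2) for p in preds]
mse_ensemble = np.mean((ensemble_pred - X @ true_coef) ** 2)
print(mse_ensemble <= np.mean(member_mse))  # True
```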
