Search Results for author: Chenglin Yang

Found 9 papers, 5 papers with code

IG Captioner: Information Gain Captioners are Strong Zero-shot Classifiers

no code implementations 27 Nov 2023 Chenglin Yang, Siyuan Qiao, Yuan Cao, Yu Zhang, Tao Zhu, Alan Yuille, Jiahui Yu

To tackle this problem, we redesign the scoring objective for the captioner to alleviate the distributional bias and focus on measuring the gain of information brought by the visual inputs.
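The snippet above describes re-scoring class prompts by the information the image adds, rather than by raw captioning likelihood. A minimal sketch of that idea, assuming the score is the captioner's image-conditioned log-likelihood minus a weighted text-only prior (the exact objective and the `alpha` weight are assumptions for illustration, not the paper's definition):

```python
def ig_score(log_p_caption_given_image, log_p_caption, alpha=1.0):
    """Information-gain style score: subtract the text-only prior
    log p(text) from log p(text | image) to cancel the captioner's
    distributional bias toward common phrasings."""
    return log_p_caption_given_image - alpha * log_p_caption

def classify(image_log_likelihoods, prior_log_likelihoods):
    """Pick the class whose prompt gains the most information from the image."""
    scores = [ig_score(li, lp)
              for li, lp in zip(image_log_likelihoods, prior_log_likelihoods)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Note how the prior term can flip a decision: a class prompt that is merely common in text (high prior) no longer wins just because the captioner assigns it high likelihood regardless of the image.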

Caption Generation · Language Modelling · +2

Lite Vision Transformer with Enhanced Self-Attention

1 code implementation CVPR 2022 Chenglin Yang, Yilin Wang, Jianming Zhang, He Zhang, Zijun Wei, Zhe Lin, Alan Yuille

We propose Lite Vision Transformer (LVT), a novel light-weight transformer network with two enhanced self-attention mechanisms to improve model performance for mobile deployment.

Panoptic Segmentation · Segmentation

Meticulous Object Segmentation

1 code implementation 13 Dec 2020 Chenglin Yang, Yilin Wang, Jianming Zhang, He Zhang, Zhe Lin, Alan Yuille

To evaluate segmentation quality near object boundaries, we propose the Meticulosity Quality (MQ) score considering both the mask coverage and boundary precision.
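The snippet above says MQ combines mask coverage with boundary precision. A hypothetical sketch of such a metric, assuming a weighted blend of mask IoU and a boundary F1 (the blend, the `beta` weight, and the one-pixel boundary definition are illustrative assumptions; the paper's actual MQ formula may differ):

```python
import numpy as np

def iou(pred, gt):
    """Mask coverage: intersection-over-union of boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def boundary(mask):
    """Boundary pixels: mask pixels with at least one 4-neighbour off the mask."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

def boundary_f1(pred, gt):
    """F1 between predicted and ground-truth boundary pixel sets."""
    pb, gb = boundary(pred), boundary(gt)
    if not pb.any() and not gb.any():
        return 1.0
    tp = np.logical_and(pb, gb).sum()
    prec = tp / pb.sum() if pb.sum() else 0.0
    rec = tp / gb.sum() if gb.sum() else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def mq_score(pred, gt, beta=0.5):
    """Meticulosity-style quality: weight mask coverage against
    boundary agreement, so coarse masks with sloppy edges score low."""
    return beta * iou(pred, gt) + (1 - beta) * boundary_f1(pred, gt)
```

Plain IoU barely penalises a mask that is a few pixels too fat everywhere; the boundary term makes that error dominate, which is the point of measuring quality near object boundaries.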

2k · 4k · +4

Robustness Out of the Box: Compositional Representations Naturally Defend Against Black-Box Patch Attacks

no code implementations 1 Dec 2020 Christian Cosgrove, Adam Kortylewski, Chenglin Yang, Alan Yuille

Second, we find that compositional deep networks, which have part-based representations that lead to innate robustness to natural occlusion, are robust to patch attacks on PASCAL3D+ and the German Traffic Sign Recognition Benchmark, without adversarial training.

Traffic Sign Recognition

Snapshot Distillation: Teacher-Student Optimization in One Generation

no code implementations CVPR 2019 Chenglin Yang, Lingxi Xie, Chi Su, Alan L. Yuille

Optimizing a deep neural network is a fundamental task in computer vision, yet direct training methods often suffer from over-fitting.

Image Classification · object-detection · +2
