Search Results for author: Bingxuan Wang

Found 5 papers, 3 papers with code

Complementing Event Streams and RGB Frames for Hand Mesh Reconstruction

no code implementations · 12 Mar 2024 · Jianping Jiang, Xinyu Zhou, Bingxuan Wang, Xiaoming Deng, Chao Xu, Boxin Shi

Experiments on real-world data demonstrate that EvRGBHand effectively addresses the challenges of using either type of camera alone by retaining the merits of both, and shows potential for generalization to outdoor scenes and to other types of event cameras.

DeepSeek-VL: Towards Real-World Vision-Language Understanding

2 code implementations · 8 Mar 2024 · Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, Yaofeng Sun, Chengqi Deng, Hanwei Xu, Zhenda Xie, Chong Ruan

The DeepSeek-VL family (both the 1.3B and 7B models) delivers a superior user experience as a vision-language chatbot in real-world applications, achieving state-of-the-art or competitive performance across a wide range of vision-language benchmarks at the same model scale while maintaining robust performance on language-centric benchmarks.

Chatbot · Language Modelling +3

Differentiable Feature Aggregation Search for Knowledge Distillation

no code implementations · ECCV 2020 · Yushuo Guan, Pengyu Zhao, Bingxuan Wang, Yuanxing Zhang, Cong Yao, Kaigui Bian, Jian Tang

To address both the efficiency and the effectiveness of knowledge distillation, we introduce feature aggregation to imitate multi-teacher distillation within a single-teacher distillation framework, extracting informative supervision from multiple teacher feature maps.
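As a loose illustration of the feature-aggregation idea (not the paper's actual differentiable search, which is learned end-to-end), the sketch below combines several teacher feature maps with softmax-normalized weights and computes a mean-squared imitation loss against the student's features. All function names and the use of NumPy here are illustrative assumptions.

```python
import numpy as np

def aggregate_teacher_features(teacher_maps, logits):
    """Weighted aggregation of teacher feature maps (illustrative sketch).

    teacher_maps: list of T arrays sharing one shape, e.g. (C, H, W)
    logits: unnormalized aggregation weights, one per teacher
    """
    # Softmax over the per-teacher logits (shifted for numerical stability).
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    stacked = np.stack(teacher_maps)          # shape (T, C, H, W)
    # Contract the teacher axis: a convex combination of feature maps.
    return np.tensordot(weights, stacked, axes=1)

def distillation_loss(student_map, aggregated_map):
    """Mean-squared error between student features and the aggregated target."""
    return float(np.mean((student_map - aggregated_map) ** 2))
```

In the actual method the aggregation weights would be optimized jointly with the student; here they are fixed inputs purely to show the data flow.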

Knowledge Distillation · Model Compression +1

Rethinking Irregular Scene Text Recognition

1 code implementation · 30 Aug 2019 · Shangbang Long, Yushuo Guan, Bingxuan Wang, Kaigui Bian, Cong Yao

Reading text from natural images is challenging due to the great variety in text fonts, colors, and sizes, as well as complex backgrounds.

Scene Text Detection
