Search Results for author: Chaoqun Wang

Found 18 papers, 7 papers with code

Talaria: Interactively Optimizing Machine Learning Models for Efficient Inference

no code implementations • 3 Apr 2024 • Fred Hohman, Chaoqun Wang, Jinmook Lee, Jochen Görtler, Dominik Moritz, Jeffrey P Bigham, Zhile Ren, Cecile Foret, Qi Shan, Xiaoyi Zhang

On-device machine learning (ML) moves computation from the cloud to personal devices, protecting user privacy and enabling intelligent user experiences.

Toward Accurate Camera-based 3D Object Detection via Cascade Depth Estimation and Calibration

no code implementations • 7 Feb 2024 • Chaoqun Wang, Yiran Qin, Zijian Kang, Ningning Ma, Ruimao Zhang

First, a depth estimation (DE) scheme leverages relative depth information to realize effective feature lifting from 2D to 3D space.

3D Object Detection, Denoising, +6
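The 2D-to-3D feature lifting step can be illustrated with a minimal sketch, assuming a discretized depth distribution per pixel (a common lifting recipe; the shapes and names below are illustrative assumptions, not the paper's exact DE scheme):

```python
import torch
import torch.nn.functional as F

B, C, H, W, D = 2, 64, 32, 88, 48         # batch, channels, feature map size, depth bins

feat = torch.randn(B, C, H, W)            # 2D image features
depth_logits = torch.randn(B, D, H, W)    # per-pixel depth-bin scores

# Softmax over depth bins gives each pixel a depth distribution; the outer
# product spreads that pixel's features along its camera ray.
depth_prob = F.softmax(depth_logits, dim=1)            # (B, D, H, W)
lifted = depth_prob.unsqueeze(1) * feat.unsqueeze(2)   # (B, C, D, H, W)
```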

The Causal Impact of Credit Lines on Spending Distributions

1 code implementation • 16 Dec 2023 • Yijun Li, Cheuk Hang Leung, Xiangqian Sun, Chaoqun Wang, Yiyan Huang, Xing Yan, Qi Wu, Dongdong Wang, Zhixiang Huang

Consumer credit services offered by e-commerce platforms provide customers with convenient loan access during shopping and have the potential to stimulate sales.

DocStormer: Revitalizing Multi-Degraded Colored Document Images to Pristine PDF

no code implementations • 27 Oct 2023 • Chaowei Liu, Jichun Li, Yihua Teng, Chaoqun Wang, Nuo Xu, Jihao Wu, Dandan Tu

Thus, we propose DocStormer, a novel algorithm designed to restore multi-degraded colored documents to their potential pristine PDF versions.

Binarization

SupFusion: Supervised LiDAR-Camera Fusion for 3D Object Detection

1 code implementation • ICCV 2023 • Yiran Qin, Chaoqun Wang, Zijian Kang, Ningning Ma, Zhen Li, Ruimao Zhang

In this paper, we propose a novel training strategy called SupFusion, which provides an auxiliary feature level supervision for effective LiDAR-Camera fusion and significantly boosts detection performance.

3D Object Detection, object-detection
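Feature-level auxiliary supervision of this kind can be sketched as pulling the student's fused features toward those of a stronger teacher (the names and the MSE choice below are assumptions, not SupFusion's actual interface):

```python
import torch
import torch.nn.functional as F

def auxiliary_feature_loss(fused_feat, teacher_feat, weight=0.5):
    # Align the student's fused features with detached teacher features;
    # this term is added to the usual detection loss during training.
    return weight * F.mse_loss(fused_feat, teacher_feat.detach())

fused = torch.randn(2, 256, 180, 180, requires_grad=True)  # student BEV features
teacher = torch.randn(2, 256, 180, 180)                     # teacher features
loss = auxiliary_feature_loss(fused, teacher)
loss.backward()
```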

DeLELSTM: Decomposition-based Linear Explainable LSTM to Capture Instantaneous and Long-term Effects in Time Series

no code implementations • 26 Aug 2023 • Chaoqun Wang, Yijun Li, Xiangqian Sun, Qi Wu, Dongdong Wang, Zhixiang Huang

The tensorized LSTM assigns each variable a unique hidden state, which together form a matrix $\mathbf{h}_t$, while the standard LSTM models all the variables with a single shared hidden state $\mathbf{H}_t$.

Time Series, Time Series Forecasting
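The two hidden-state layouts contrasted in the abstract can be written down directly (shapes only; the LSTM update equations are omitted):

```python
import torch

N, d = 5, 16              # N input variables, hidden size d

# Standard LSTM: one shared hidden state H_t for all variables.
H_t = torch.zeros(d)

# Tensorized LSTM: a unique hidden state per variable, stacked into a matrix h_t.
h_t = torch.zeros(N, d)
```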

Region-Enhanced Feature Learning for Scene Semantic Segmentation

no code implementations • 15 Apr 2023 • Xin Kang, Chaoqun Wang, Xuejin Chen

We design a region-based feature enhancement (RFE) module, which consists of a Semantic-Spatial Region Extraction stage and a Region Dependency Modeling stage.

Segmentation, Semantic Segmentation
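A rough sketch of the two-stage idea, assuming region assignments are given (in RFE they are produced by the learned Semantic-Spatial Region Extraction stage, which is not reproduced here):

```python
import torch
import torch.nn.functional as F

P, C, R = 1024, 64, 8                     # points, channels, regions
feat = torch.randn(P, C)                  # per-point features
region_id = torch.randint(0, R, (P,))     # assumed region assignment

# Stage 1: summarize each region with a pooled descriptor.
regions = torch.stack([feat[region_id == r].mean(0) for r in range(R)])  # (R, C)

# Stage 2: model dependencies between regions with self-attention.
attn = F.softmax(regions @ regions.T / C ** 0.5, dim=-1)
enhanced = attn @ regions                 # (R, C)

# Broadcast the enhanced region context back to the points.
feat = feat + enhanced[region_id]
```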

Semantic Human Parsing via Scalable Semantic Transfer over Multiple Label Domains

no code implementations • CVPR 2023 • Jie Yang, Chaoqun Wang, Zhen Li, Junle Wang, Ruimao Zhang

This paper presents Scalable Semantic Transfer (SST), a novel training paradigm, to explore how to leverage the mutual benefits of the data from different label domains (i.e., various levels of label granularity) to train a powerful human parsing network.

Human Parsing, Representation Learning

Generalized 3D Self-supervised Learning Framework via Prompted Foreground-Aware Feature Contrast

1 code implementation • CVPR 2023 • Kangcheng Liu, Xinhu Zheng, Chaoqun Wang, Kai Tang, Ming Liu, Baoquan Chen

Second, we prevent over-discrimination between 3D segments/objects and instead encourage grouped foreground-to-background distinctions at the segment level, using adaptive feature learning in a Siamese correspondence network that learns feature correlations within and across point cloud views.

3D Semantic Segmentation, Contrastive Learning, +8
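The segment-level foreground contrast can be approximated with a standard InfoNCE objective over corresponding segments from two views (a simplified stand-in; the adaptive Siamese correspondence network itself is not reproduced):

```python
import torch
import torch.nn.functional as F

def segment_info_nce(seg_a, seg_b, temperature=0.1):
    # seg_a, seg_b: (S, C) embeddings of S corresponding segments from two views.
    a = F.normalize(seg_a, dim=1)
    b = F.normalize(seg_b, dim=1)
    logits = a @ b.T / temperature         # (S, S) cross-view similarities
    targets = torch.arange(a.size(0))      # the i-th segment matches the i-th
    return F.cross_entropy(logits, targets)

loss = segment_info_nce(torch.randn(32, 128), torch.randn(32, 128))
```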

Semantics-Preserving Sketch Embedding for Face Generation

no code implementations • 23 Nov 2022 • Binxin Yang, Xuejin Chen, Chaoqun Wang, Chi Zhang, Zihan Chen, Xiaoyan Sun

With a semantic feature matching loss for effective semantic supervision, our sketch embedding precisely conveys the semantics in the input sketches to the synthesized images.

Face Generation, Image-to-Image Translation
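A semantic feature matching loss of the kind mentioned above can be sketched as a feature-space distance under a frozen semantic network (the extractor below is a stand-in assumption):

```python
import torch
import torch.nn.functional as F

def semantic_matching_loss(extractor, generated, target):
    # Compare images in the feature space of a frozen semantic network
    # rather than in pixel space.
    with torch.no_grad():
        target_feat = extractor(target)
    return F.l1_loss(extractor(generated), target_feat)

extractor = torch.nn.Conv2d(3, 8, 3)  # stand-in for a pretrained semantic network
loss = semantic_matching_loss(extractor, torch.randn(1, 3, 64, 64),
                              torch.randn(1, 3, 64, 64))
```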

Dual Progressive Prototype Network for Generalized Zero-Shot Learning

no code implementations • NeurIPS 2021 • Chaoqun Wang, Shaobo Min, Xuejin Chen, Xiaoyan Sun, Houqiang Li

This enables DPPN to produce visual representations with accurate attribute localization ability, which benefits the semantic-visual alignment and representation transferability.

Attribute, Generalized Zero-Shot Learning

Text-Aware Single Image Specular Highlight Removal

1 code implementation • PRCV 2021 • Shiyu Hou, Chaoqun Wang, Weize Quan, Jingen Jiang, Dong-Ming Yan

The core goal is to improve the accuracy of text detection and recognition by removing specular highlights from text images.

Highlight Detection, highlight removal, +1

Task-Independent Knowledge Makes for Transferable Representations for Generalized Zero-Shot Learning

no code implementations • 5 Apr 2021 • Chaoqun Wang, Xuejin Chen, Shaobo Min, Xiaoyan Sun, Houqiang Li

First, DCEN leverages task labels to cluster representations of the same semantic category via cross-modal contrastive learning while exploring semantic-visual complementarity.

Contrastive Learning, Generalized Zero-Shot Learning
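Label-supervised cross-modal clustering of this kind can be sketched with a supervised contrastive loss between visual and semantic embeddings (a simplified form, not DCEN's exact objective):

```python
import torch
import torch.nn.functional as F

def cross_modal_supcon(vis, sem, labels, temperature=0.1):
    # vis: (B, C) visual embeddings; sem: (B, C) semantic embeddings.
    vis, sem = F.normalize(vis, dim=1), F.normalize(sem, dim=1)
    logits = vis @ sem.T / temperature                  # (B, B) similarities
    pos = (labels[:, None] == labels[None, :]).float()  # same-class pairs
    log_prob = logits - logits.logsumexp(dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).div(pos.sum(1)).mean()

loss = cross_modal_supcon(torch.randn(8, 64), torch.randn(8, 64),
                          torch.randint(0, 3, (8,)))
```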

Scene Text Removal via Cascaded Text Stroke Detection and Erasing

1 code implementation • 19 Nov 2020 • Xuewei Bian, Chaoqun Wang, Weize Quan, Juntao Ye, Xiaopeng Zhang, Dong-Ming Yan

Specifically, we decouple the text removal problem into text stroke detection and stroke removal.
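The decoupling reads naturally as a two-stage pipeline; a minimal sketch, with both networks as placeholder assumptions (the paper cascades and refines these stages):

```python
import torch

def remove_text(image, stroke_detector, stroke_eraser):
    # Stage 1: predict a soft text-stroke mask, (B, 1, H, W) in [0, 1].
    mask = stroke_detector(image)
    # Stage 2: erase strokes, conditioning the eraser on the mask.
    inpainted = stroke_eraser(torch.cat([image, mask], dim=1))
    # Only replace pixels the detector marked as strokes.
    return mask * inpainted + (1 - mask) * image
```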

Cross-Modal Pattern-Propagation for RGB-T Tracking

no code implementations • CVPR 2020 • Chaoqun Wang, Chunyan Xu, Zhen Cui, Ling Zhou, Tong Zhang, Xiaoya Zhang, Jian Yang

Motivated by our observation on RGB-T data that pattern correlations frequently recur across modalities as well as along sequence frames, in this paper we propose a cross-modal pattern-propagation (CMPP) tracking framework to diffuse instance patterns across RGB-T data in both the spatial and temporal domains.

Object Tracking, RGB-T Tracking
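Pattern propagation across modalities can be sketched as cross-attention driven by an affinity map between RGB and thermal features (illustrative of the idea only, not the CMPP architecture):

```python
import torch
import torch.nn.functional as F

def propagate(src, dst):
    # src, dst: (B, C, H, W); route src patterns onto dst locations.
    B, C, H, W = src.shape
    q = dst.flatten(2).transpose(1, 2)              # (B, HW, C)
    k = src.flatten(2)                              # (B, C, HW)
    affinity = F.softmax(q @ k / C ** 0.5, dim=-1)  # (B, HW, HW)
    out = affinity @ src.flatten(2).transpose(1, 2) # (B, HW, C)
    return dst + out.transpose(1, 2).reshape(B, C, H, W)

fused = propagate(torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16))
```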

Domain-aware Visual Bias Eliminating for Generalized Zero-Shot Learning

1 code implementation • CVPR 2020 • Shaobo Min, Hantao Yao, Hongtao Xie, Chaoqun Wang, Zheng-Jun Zha, Yongdong Zhang

Recent methods focus on learning a unified semantic-aligned visual representation to transfer knowledge between two domains, while ignoring the effect of semantic-free visual representation in alleviating the biased recognition problem.

Generalized Zero-Shot Learning
