Search Results for author: Xuran Pan

Found 7 papers, 5 papers with code

ActiveNeRF: Learning where to See with Uncertainty Estimation

no code implementations 18 Sep 2022 Xuran Pan, Zihang Lai, Shiji Song, Gao Huang

In this paper, we present a novel learning framework, ActiveNeRF, aiming to model a 3D scene with a constrained input budget.

Active Learning Novel View Synthesis
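
The snippet above does not spell out the selection strategy, so the following is only a minimal, hypothetical sketch of uncertainty-guided view selection for a NeRF-style model under a constrained capture budget; render_with_uncertainty and candidate_poses are assumed placeholders, not the authors' API.

    import numpy as np

    def select_next_view(model, candidate_poses, render_with_uncertainty):
        """Pick the candidate camera pose whose rendering is most uncertain."""
        scores = []
        for pose in candidate_poses:
            # render_with_uncertainty is assumed to return (rgb, per-pixel variance)
            _, variance = render_with_uncertainty(model, pose)
            scores.append(np.mean(variance))
        return candidate_poses[int(np.argmax(scores))]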

Vision Transformer with Deformable Attention

2 code implementations CVPR 2022 Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, Gao Huang

On the one hand, using dense attention, e.g., in ViT, leads to excessive memory and computational cost, and features can be influenced by irrelevant parts that lie beyond the regions of interest.

Ranked #2 on Object Detection on COCO test-dev (AP metric)

Image Classification Object Detection +1
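
As a rough illustration of the deformable attention idea named in the title (a simplified sketch, not the paper's DAT module): instead of attending densely, queries predict offsets from a set of reference points, and keys/values are bilinearly sampled at the shifted locations before standard attention is applied.

    import torch
    import torch.nn.functional as F

    def deformable_sampling(feat, ref_points, offsets):
        """feat: (B, C, H, W); ref_points, offsets: (B, N, 2) in [-1, 1] coordinates."""
        loc = (ref_points + offsets).clamp(-1, 1)            # shifted sampling locations
        sampled = F.grid_sample(feat, loc.unsqueeze(2),      # grid shape (B, N, 1, 2)
                                align_corners=True)          # -> (B, C, N, 1)
        return sampled.squeeze(-1).transpose(1, 2)           # (B, N, C) keys/values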

On the Integration of Self-Attention and Convolution

1 code implementation CVPR 2022 Xuran Pan, Chunjiang Ge, Rui Lu, Shiji Song, Guanfu Chen, Zeyi Huang, Gao Huang

In this paper, we show that there exists a strong underlying relation between them, in the sense that the bulk of the computation in these two paradigms is in fact done with the same operation.

Representation Learning
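
The shared operation referred to above is essentially the 1x1 convolution (projection). Below is a small numerical check of that underlying observation, sketched here rather than the paper's ACmix module: a k x k convolution can be decomposed into k*k 1x1 convolutions whose outputs are shifted and summed, so its expensive projection stage matches the 1x1 projections used in self-attention.

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 4, 8, 8)
    w = torch.randn(6, 4, 3, 3)                       # a standard 3x3 kernel

    ref = F.conv2d(x, w, padding=1)                   # ordinary 3x3 convolution

    out = torch.zeros_like(ref)
    for i in range(3):                                # kernel row
        for j in range(3):                            # kernel column
            w_ij = w[:, :, i, j, None, None]          # one 1x1 kernel
            y = F.conv2d(x, w_ij)                     # 1x1 convolution (projection)
            # shift the 1x1 output to the tap position (i, j) and accumulate
            out += F.pad(y, (1, 1, 1, 1))[:, :, i:i + 8, j:j + 8]

    print(torch.allclose(ref, out, atol=1e-4))        # True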

A Unified Framework for Convolution-based Graph Neural Networks

no code implementations 1 Jan 2021 Xuran Pan, Shiji Song, Gao Huang

In this paper, we take a step forward to establish a unified framework for convolution-based graph neural networks, by formulating the basic graph convolution operation as an optimization problem in the graph Fourier space.
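
As one concrete instance of this optimization view (an illustrative sketch, not necessarily the paper's exact formulation), the Laplacian-smoothing objective min_Z ||Z - X||^2 + lam * tr(Z^T L Z) has the closed-form solution Z = (I + lam * L)^{-1} X, which acts as a low-pass filter in the graph Fourier (Laplacian eigen-) space and is the kind of propagation that GCN-style layers approximate.

    import numpy as np

    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)        # toy 4-node adjacency
    L = np.diag(A.sum(axis=1)) - A                   # unnormalized graph Laplacian
    X = np.random.randn(4, 3)                        # node features
    lam = 0.5

    # minimizer of ||Z - X||_F^2 + lam * tr(Z^T L Z)
    Z = np.linalg.solve(np.eye(4) + lam * L, X)      # smoothed node features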

3D Object Detection with Pointformer

1 code implementation CVPR 2021 Xuran Pan, Zhuofan Xia, Shiji Song, Li Erran Li, Gao Huang

In this paper, we propose Pointformer, a Transformer backbone designed for 3D point clouds to learn features effectively.

3D Object Detection Object Detection +1
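
For intuition only, here is a generic point-cloud self-attention block in the spirit described above (not the actual Pointformer architecture): point coordinates are embedded and added to the point features before standard multi-head self-attention.

    import torch
    import torch.nn as nn

    class PointAttentionBlock(nn.Module):
        def __init__(self, dim=64, heads=4):
            super().__init__()
            self.pos_embed = nn.Linear(3, dim)                   # xyz -> feature dim
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, xyz, feats):
            """xyz: (B, N, 3) coordinates; feats: (B, N, dim) per-point features."""
            h = feats + self.pos_embed(xyz)                      # inject positions
            out, _ = self.attn(h, h, h)                          # self-attention over points
            return out

    block = PointAttentionBlock()
    xyz, feats = torch.randn(2, 128, 3), torch.randn(2, 128, 64)
    print(block(xyz, feats).shape)                               # torch.Size([2, 128, 64])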

Regularizing Deep Networks with Semantic Data Augmentation

1 code implementation 21 Jul 2020 Yulin Wang, Gao Huang, Shiji Song, Xuran Pan, Yitong Xia, Cheng Wu

The proposed method is inspired by the intriguing property that deep networks are effective in learning linearized features, i.e., certain directions in the deep feature space correspond to meaningful semantic transformations, e.g., changing the background or view angle of an object.

Data Augmentation
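
The linearized-feature property described above suggests augmenting in feature space rather than pixel space. The sketch below is only an explicit-sampling illustration of that idea (the papers take an implicit route rather than sampling augmented features directly): estimate a class-conditional covariance of deep features and perturb each feature along directions drawn from that Gaussian.

    import numpy as np

    def augment_features(feats, labels, strength=0.5, n_aug=1):
        """feats: (N, d) deep features; labels: (N,) integer class labels."""
        aug_feats, aug_labels = [], []
        for c in np.unique(labels):
            f_c = feats[labels == c]
            d = f_c.shape[1]
            cov = np.cov(f_c, rowvar=False) + 1e-6 * np.eye(d)    # class covariance
            noise = np.random.multivariate_normal(
                np.zeros(d), strength * cov, size=(n_aug, len(f_c)))
            aug_feats.append((f_c[None] + noise).reshape(-1, d))   # semantic perturbations
            aug_labels.append(np.full(n_aug * len(f_c), c))        # labels are preserved
        return np.concatenate(aug_feats), np.concatenate(aug_labels)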

Implicit Semantic Data Augmentation for Deep Networks

1 code implementation NeurIPS 2019 Yulin Wang, Xuran Pan, Shiji Song, Hong Zhang, Cheng Wu, Gao Huang

Our work is motivated by the intriguing property that deep networks are surprisingly good at linearizing features, such that certain directions in the deep feature space correspond to meaningful semantic transformations, e.g., adding sunglasses or changing backgrounds.

Image Augmentation
