no code implementations • 30 May 2024 • Jianghao Shen, Nan Xue, Tianfu Wu
Child 3D Gaussians are learned via a lightweight Multi-Layer Perceptron (MLP) which takes as input the projected image features of a parent 3D Gaussian and the embedding of a target camera view.
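The parent-to-child prediction described above can be sketched as a small MLP. This is a hypothetical illustration, not the authors' code: the feature, embedding, and per-child parameter dimensions below are assumptions chosen only to make the shapes concrete.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the paper):
FEAT_DIM, VIEW_DIM, K = 32, 16, 4       # parent feature, view embedding, children per parent
CHILD_DIM = 3 + 3 + 4 + 1 + 3           # offset, scale, rotation quaternion, opacity, color

rng = np.random.default_rng(0)
W1 = rng.standard_normal((FEAT_DIM + VIEW_DIM, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, K * CHILD_DIM)) * 0.1
b2 = np.zeros(K * CHILD_DIM)

def predict_children(parent_feat, view_emb):
    """One-hidden-layer MLP: concat(parent feature, view embedding) -> ReLU -> child params."""
    x = np.concatenate([parent_feat, view_emb])
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    out = h @ W2 + b2
    return out.reshape(K, CHILD_DIM)        # one row of parameters per child Gaussian

children = predict_children(rng.standard_normal(FEAT_DIM),
                            rng.standard_normal(VIEW_DIM))
print(children.shape)  # (4, 14)
```

In practice such an MLP would be trained end-to-end with a differentiable renderer; the sketch only shows the input/output interface the abstract describes.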
1 code implementation • 29 Dec 2021 • Jianghao Shen, Tianfu Wu
For image recognition tasks, the proposed SASE is used as a drop-in replacement for convolution layers in ResNets. It achieves much better accuracy than the vanilla ResNets, and slightly better accuracy than MHSA counterparts such as the Swin-Transformer and Pyramid-Transformer on the ImageNet-1000 dataset, with significantly smaller models.
no code implementations • 29 Dec 2020 • Jianghao Shen, Sicheng Wang, Zhangyang Wang
For example, our model with only 1 layer of 15 trees can perform comparably with the model in [3] with 2 layers of 2000 trees each.
1 code implementation • 3 Jan 2020 • Jianghao Shen, Yonggan Fu, Yue Wang, Pengfei Xu, Zhangyang Wang, Yingyan Lin
The core idea of DFS is to hypothesize layer-wise quantization (to different bitwidths) as intermediate "soft" choices to be made between fully utilizing and skipping a layer.
no code implementations • 10 Jul 2019 • Yue Wang, Jianghao Shen, Ting-Kuei Hu, Pengfei Xu, Tan Nguyen, Richard Baraniuk, Zhangyang Wang, Yingyan Lin
State-of-the-art convolutional neural networks (CNNs) yield record-breaking predictive performance, yet at the cost of energy-intensive inference, which prohibits their wide deployment in resource-constrained Internet of Things (IoT) applications.