Search Results for author: Yihang Chen

Found 6 papers, 4 papers with code

HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression

1 code implementation · 21 Mar 2024 · Yihang Chen, Qianyi Wu, Jianfei Cai, Mehrtash Harandi, Weiyao Lin

3D Gaussian Splatting (3DGS) has emerged as a promising framework for novel view synthesis, boasting rapid rendering speed with high fidelity.

Attribute · Novel View Synthesis · +1

Generalization of Scaled Deep ResNets in the Mean-Field Regime

no code implementations · 14 Mar 2024 · Yihang Chen, Fanghui Liu, Yiping Lu, Grigorios G. Chrysos, Volkan Cevher

To derive the generalization bounds under this setting, our analysis necessitates a shift from the conventional time-invariant Gram matrix employed in the lazy training regime to a time-variant, distribution-dependent version.

Generalization Bounds
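
As a rough illustration of the contrast drawn in the abstract (notation is generic and not taken from the paper): in the lazy-training regime the Gram/NTK matrix is evaluated at the initial parameters and stays fixed, while in the mean-field regime the kernel depends on the parameter distribution as it evolves over training time.

% Lazy training: Gram matrix frozen at the initialization \theta_0
K_{ij} = \langle \nabla_\theta f(x_i; \theta_0), \nabla_\theta f(x_j; \theta_0) \rangle
% Mean-field regime: kernel depends on the time-varying parameter distribution \rho_t
K_t(x_i, x_j) = \mathbb{E}_{\theta \sim \rho_t}\!\left[ \nabla_\theta f(x_i; \theta)^\top \nabla_\theta f(x_j; \theta) \right]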

Swift Parameter-free Attention Network for Efficient Super-Resolution

1 code implementation · 21 Nov 2023 · Cheng Wan, Hongyuan Yu, Zhiqi Li, Yihang Chen, Yajun Zou, Yuqing Liu, Xuanwu Yin, Kunlong Zuo

To address this issue, we propose the Swift Parameter-free Attention Network (SPAN), a highly efficient SISR model that balances parameter count, inference speed, and image quality.

Image Super-Resolution
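
A minimal, hypothetical sketch of what a "parameter-free" attention gate can look like, assuming a simple sigmoid self-gate over the channel mean (PyTorch); this is a generic illustration, not necessarily SPAN's actual formulation.

import torch

def parameter_free_attention(x):
    # Attention map derived from the feature itself (sigmoid of the channel mean),
    # so the gate adds no learnable parameters. Generic sketch, not SPAN's exact design.
    attn = torch.sigmoid(x.mean(dim=1, keepdim=True))  # (N, 1, H, W) gate
    return x * attn

feat = torch.randn(1, 32, 64, 64)
print(parameter_free_attention(feat).shape)  # torch.Size([1, 32, 64, 64])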

Order-Preserving GFlowNets

1 code implementation · 30 Sep 2023 · Yihang Chen, Lukas Mauch

To address these issues, we propose Order-Preserving GFlowNets (OP-GFNs), which sample with probabilities in proportion to a learned reward function that is consistent with a provided (partial) order on the candidates, thus eliminating the need for an explicit formulation of the reward function.

Neural Architecture Search
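
A minimal sketch of learning a reward that is consistent with a provided (partial) order, assuming a pairwise logistic ranking loss and placeholder network/feature sizes (PyTorch); the actual OP-GFN objective may differ. Sampling candidates in proportion to such a learned reward is then left to the GFlowNet machinery.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder reward network; the 16-dim candidate encoding is an assumption.
reward_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

def order_consistency_loss(x_preferred, x_other):
    # Pairwise logistic loss pushing r(x_preferred) above r(x_other)
    # for every pair ordered by the provided partial order.
    margin = reward_net(x_preferred) - reward_net(x_other)
    return F.softplus(-margin).mean()  # log(1 + exp(-margin))

a, b = torch.randn(32, 16), torch.randn(32, 16)  # stand-in ordered pairs, a preferred over b
order_consistency_loss(a, b).backward()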

A Rate-Distortion Approach to Domain Generalization

no code implementations · 29 Sep 2021 · Yihang Chen, Grigorios Chrysos, Volkan Cevher

Domain generalization deals with the difference in distribution between the training and testing datasets, i.e., the domain shift problem, by extracting domain-invariant features.

Contrastive Learning · Domain Generalization

Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot

1 code implementation · NeurIPS 2020 · Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Li-Wei Wang, Jason D. Lee

In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods and surprisingly find that: (1) A set of methods which aims to find good subnetworks of the randomly-initialized network (which we call "initial tickets"), hardly exploits any information from the training data; (2) For the pruned networks obtained by these methods, randomly changing the preserved weights in each layer, while keeping the total number of preserved weights unchanged per layer, does not affect the final performance.

Network Pruning
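
A small sketch of the layer-wise randomization described in point (2) above, assuming pruning masks are given as a dict of 0/1 tensors with placeholder names and shapes (PyTorch): each layer's mask is shuffled so the kept positions change while the per-layer count of kept weights stays fixed.

import torch

def layerwise_shuffle_mask(masks):
    # For each layer, randomly permute the 0/1 pruning mask: the kept positions
    # change, but the number of kept weights in that layer stays the same.
    shuffled = {}
    for name, mask in masks.items():
        flat = mask.flatten()
        perm = torch.randperm(flat.numel())
        shuffled[name] = flat[perm].reshape(mask.shape)
    return shuffled

toy = {"conv1": (torch.rand(4, 4) < 0.25).float()}  # toy mask, roughly 25% kept
new = layerwise_shuffle_mask(toy)
assert new["conv1"].sum() == toy["conv1"].sum()  # per-layer kept count preserved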
