1 code implementation • 16 Aug 2024 • Chao Zeng, Songwei Liu, Yusheng Xie, Hong Liu, Xiaojian Wang, Miao Wei, Shu Yang, Fangmin Chen, Xing Mei
With a W2*A8 quantization configuration on the LLaMA-7B model, it achieves a WikiText2 perplexity of 7.59 (2.17$\downarrow$ vs. 9.76 for AffineQuant).
no code implementations • 13 Aug 2024 • Chenqian Yan, Songwei Liu, Hongjian Liu, Xurui Peng, Xiaojian Wang, Fangmin Chen, Lean Fu, Xing Mei
Conversely, while many compact models tailored for edge devices can reduce these demands, they often compromise semantic integrity and visual quality compared to full-sized SDMs.
no code implementations • 1 Jul 2024 • Songwei Liu, Chao Zeng, Lianqiang Li, Chenqian Yan, Lean Fu, Xing Mei, Fangmin Chen
Based on this observation, we propose an efficient model volume compression strategy, termed FoldGPT, which combines block removal with block parameter sharing. The strategy consists of three parts: (1) using learnable gating parameters, we determine the block importance ranking while modeling the coupling effect between blocks.
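The gating idea in part (1) can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not FoldGPT's actual implementation: each block is wrapped with a learnable scalar gate, and the learned gate magnitudes serve as an importance ranking for deciding which blocks to remove or share.

```python
import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    """Hypothetical sketch: wrap a block with a learnable scalar gate.

    When the gate saturates toward 0 the block reduces to identity,
    so the learned gate magnitude reflects the block's importance.
    """
    def __init__(self, block):
        super().__init__()
        self.block = block
        self.gate = nn.Parameter(torch.ones(1))  # learnable gating parameter

    def forward(self, x):
        g = torch.sigmoid(self.gate)
        # Gated residual mixes the block output with a skip connection.
        return g * self.block(x) + (1 - g) * x

def rank_blocks(gated_blocks):
    """Rank block indices by learned gate magnitude, most important first."""
    scores = [torch.sigmoid(b.gate).item() for b in gated_blocks]
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```

Because all gates are trained jointly in one forward pass, the ranking reflects inter-block coupling rather than scoring each block in isolation.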
no code implementations • CVPR 2023 • Han Yan, Celong Liu, Chao Ma, Xing Mei
VDB combines the advantages of sparse and dense volumes for compact data representation and efficient data access, making it a promising data structure for NeRF data interpolation and ray marching.
no code implementations • 30 Mar 2021 • Yifan Wang, Linjie Luo, Xiaohui Shen, Xing Mei
Recently, significant progress has been made in single-view depth estimation thanks to increasingly large and diverse depth datasets.
no code implementations • 4 Dec 2020 • Zhiyong Huang, Kekai Sheng, WeiMing Dong, Xing Mei, Chongyang Ma, Feiyue Huang, Dengwen Zhou, Changsheng Xu
For intra-domain propagation, we propose an effective self-training strategy to mitigate the noise in pseudo-labeled target-domain data and improve feature discriminability in the target domain.
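A common form of such noise mitigation is confidence-based pseudo-label selection. The sketch below is an assumption about the general technique, not the paper's exact strategy: only target-domain predictions above a confidence threshold are kept as pseudo-labels for the next round of self-training.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Hypothetical sketch: filter noisy pseudo-labels by confidence.

    probs: (N, C) array of class probabilities on unlabeled target data.
    Returns the predicted labels and indices of samples whose maximum
    class probability meets the threshold; the rest are discarded.
    """
    conf = probs.max(axis=1)          # per-sample confidence
    labels = probs.argmax(axis=1)     # predicted pseudo-labels
    mask = conf >= threshold          # keep only confident predictions
    return labels[mask], np.flatnonzero(mask)
```

The threshold trades label noise against coverage: raising it yields cleaner but fewer pseudo-labels.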
1 code implementation • 15 May 2019 • Huaiyu Li, Wei-Ming Dong, Xing Mei, Chongyang Ma, Feiyue Huang, Bao-Gang Hu
The TargetNet module is a neural network for solving a specific task, while the MetaNet module learns to generate functional weights for TargetNet by observing training samples.
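This MetaNet/TargetNet split is a hypernetwork pattern, which can be sketched as follows. The architecture here is a minimal assumption for illustration, not the paper's actual design: MetaNet pools observed samples into a task embedding and emits the weights of a single linear TargetNet layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaNet(nn.Module):
    """Hypothetical sketch: generate TargetNet weights from samples."""
    def __init__(self, feat_dim, target_in, target_out):
        super().__init__()
        self.target_in, self.target_out = target_in, target_out
        # Maps a task embedding to a flat vector of TargetNet parameters.
        self.net = nn.Linear(feat_dim, target_in * target_out + target_out)

    def forward(self, support_feats):
        task_emb = support_feats.mean(dim=0)   # pool observed samples
        params = self.net(task_emb)
        n_w = self.target_in * self.target_out
        w = params[:n_w].view(self.target_out, self.target_in)
        b = params[n_w:]
        return w, b

def target_net(x, w, b):
    """TargetNet: a linear layer whose weights come from MetaNet."""
    return F.linear(x, w, b)
```

Gradients flow through the generated weights back into MetaNet, so it learns to produce task-appropriate parameters rather than training TargetNet directly.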
1 code implementation • ACM Multimedia Conference 2018 • Kekai Sheng, Wei-Ming Dong, Chongyang Ma, Xing Mei, Feiyue Huang, Bao-Gang Hu
Aggregation structures with explicit information, such as image attributes and scene semantics, are effective and popular in intelligent systems for assessing the aesthetics of visual data.
Ranked #1 on Aesthetics Quality Assessment on AVA
no code implementations • ICCV 2015 • Xing Mei, Honggang Qi, Bao-Gang Hu, Siwei Lyu
In this work, we describe an effective and efficient approach that incorporates the knowledge of distinct pixel values in pristine images into the general regularized least-squares restoration framework.
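One simple way to combine a regularized least-squares solve with a discrete pixel-value prior is alternating minimization with a quadratic penalty. The sketch below is an illustrative stand-in for the paper's formulation, with hypothetical parameter names: it alternates a Tikhonov-regularized solve with projection onto the known set of pixel values.

```python
import numpy as np

def rls_restore_discrete(y, H, lam, values, iters=10, rho=1.0):
    """Hypothetical sketch: restoration with a distinct-pixel-value prior.

    Alternates a regularized least-squares step
        x = argmin ||Hx - y||^2 + lam ||x||^2 + rho ||x - z||^2
    with projection of x onto the discrete set `values` of pixel
    intensities known to occur in the pristine image.
    """
    def project(x):
        # Snap each coordinate to the nearest allowed pixel value.
        return values[np.abs(x[:, None] - values[None, :]).argmin(axis=1)]

    n = H.shape[1]
    # Initial regularized least-squares estimate, no discrete prior yet.
    x = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
    for _ in range(iters):
        z = project(x)
        # Re-solve with a quadratic penalty pulling x toward z.
        x = np.linalg.solve(H.T @ H + (lam + rho) * np.eye(n),
                            H.T @ y + rho * z)
    return project(x)
```

The output is guaranteed to use only pixel values from the allowed set, which is the constraint the prior encodes.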
no code implementations • CVPR 2015 • Xing Mei, Wei-Ming Dong, Bao-Gang Hu, Siwei Lyu
Marginal histograms provide valuable information for various computer vision problems.
no code implementations • 26 Mar 2014 • Weiming Dong, Fuzhang Wu, Yan Kong, Xing Mei, Tong-Yee Lee, Xiaopeng Zhang
We propose to retarget the textural regions by content-aware synthesis and non-textural regions by fast multi-operators.
no code implementations • 19 Feb 2014 • Chun-Guo Li, Xing Mei, Bao-Gang Hu
In this work, we focus on unsupervised ranking from multi-attribute data which is also common in evaluation tasks.
no code implementations • CVPR 2013 • Xing Mei, Xun Sun, Wei-Ming Dong, Haitao Wang, Xiaopeng Zhang
Instead of employing the minimum spanning tree (MST) and its variants, a new tree structure, "Segment-Tree", is proposed for non-local matching cost aggregation.
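The flavor of tree-based non-local aggregation can be shown on the simplest possible tree, a 1-D scanline chain. This is a simplified stand-in for the Segment-Tree (and MST) algorithms, not their implementation: edge weights decay with intensity difference, and two passes give every pixel a similarity-weighted sum of all other pixels' matching costs.

```python
import numpy as np

def aggregate_costs_chain(cost, intensity, sigma=10.0):
    """Hypothetical 1-D sketch of non-local cost aggregation on a tree.

    cost:      (n, d) matching cost per pixel and disparity.
    intensity: (n,) scanline intensities; a chain of pixels is a
               trivial tree rooted at the last pixel.
    """
    n = cost.shape[0]
    # Edge similarity between neighbors, as in MST-style aggregation.
    w = np.exp(-np.abs(np.diff(intensity)) / sigma)
    agg = cost.copy()
    # Leaf-to-root pass: accumulate costs from the left subtree.
    for i in range(1, n):
        agg[i] += w[i - 1] * agg[i - 1]
    # Root-to-leaf pass: propagate the rest of the tree back down,
    # subtracting each node's own contribution to avoid double counting.
    out = agg.copy()
    for i in range(n - 2, -1, -1):
        out[i] = agg[i] + w[i] * (out[i + 1] - w[i] * agg[i])
    return out
```

The two-pass structure is what makes tree aggregation linear-time; the Segment-Tree idea replaces this trivial chain with a tree that respects segmentation boundaries.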
no code implementations • CVPR 2013 • Weiming Li, Haitao Wang, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Xing Mei, Tao Hong, Hoyoung Lee, Jiyeun Kim
Based on this, 3D image artifacts are shown to be effectively removed in a test TLA-IID with challenging misalignments.