1 code implementation • 17 Apr 2024 • Xin Li, Kun Yuan, Yajing Pei, Yiting Lu, Ming Sun, Chao Zhou, Zhibo Chen, Radu Timofte, Wei Sun, HaoNing Wu, ZiCheng Zhang, Jun Jia, Zhichao Zhang, Linhan Cao, Qiubo Chen, Xiongkuo Min, Weisi Lin, Guangtao Zhai, Jianhui Sun, Tianyi Wang, Lei Li, Han Kong, Wenxuan Wang, Bing Li, Cheng Luo, Haiqiang Wang, Xiangguang Chen, Wenhui Meng, Xiang Pan, Huiying Shi, Han Zhu, Xiaozhong Xu, Lei Sun, Zhenzhong Chen, Shan Liu, Fangyuan Kong, Haotian Fan, Yifang Xu, Haoran Xu, Mengduo Yang, Jie Zhou, Jiaze Li, Shijie Wen, Mai Xu, Da Li, Shunyu Yao, Jiazhi Du, WangMeng Zuo, Zhibo Li, Shuai He, Anlong Ming, Huiyuan Fu, Huadong Ma, Yong Wu, Fie Xue, Guozhi Zhao, Lina Du, Jie Guo, Yu Zhang, Huimin Zheng, JunHao Chen, Yue Liu, Dulan Zhou, Kele Xu, Qisheng Xu, Tao Sun, Zhixiang Ding, Yuhang Hu
This paper reviews the NTIRE 2024 Challenge on Short-form UGC Video Quality Assessment (S-UGC VQA), where various excellent solutions were submitted and evaluated on KVQ, a dataset collected from the popular short-form video platform Kuaishou/Kwai.
1 code implementation • 5 Apr 2024 • JunHao Chen, Xiang Li, Xiaojun Ye, Chao Li, Zhaoxin Fan, Hao Zhao
The definition of an IDEA is the composition of multimodal inputs including text, image, and 3D models.
1 code implementation • 24 Mar 2024 • Xiaojun Hou, Jiazheng Xing, Yijie Qian, Yaowei Guo, Shuo Xin, JunHao Chen, Kai Tang, Mengmeng Wang, Zhengkai Jiang, Liang Liu, Yong Liu
Multimodal Visual Object Tracking (VOT) has recently gained significant attention due to its robustness.
no code implementations • 18 Mar 2024 • Mingjin Chen, JunHao Chen, Xiaojun Ye, Huan-ang Gao, Xiaoxue Chen, Zhaoxin Fan, Hao Zhao
In this paper, we propose a new method called \emph{Ultraman} for fast reconstruction of textured 3D human models from a single image.
1 code implementation • 21 Feb 2024 • Xinrong Zhang, Yingfa Chen, Shengding Hu, Zihang Xu, JunHao Chen, Moo Khai Hao, Xu Han, Zhen Leng Thai, Shuo Wang, Zhiyuan Liu, Maosong Sun
Processing and reasoning over long contexts is crucial for many practical applications of Large Language Models (LLMs), such as document comprehension and agent construction.
1 code implementation • 12 Dec 2023 • Jingyang Xiang, Siqi Li, JunHao Chen, Zhuangzhi Chen, Tianxin Huang, Linpeng Peng, Yong Liu
Meanwhile, a sparsity strategy that gradually increases the percentage of N:M weight blocks is applied, which allows the network to heal from the pruning-induced damage progressively.
1 code implementation • 22 Nov 2023 • JunHao Chen, Peng Rong, Jingbo Sun, Chao Li, Xiang Li, Hongwu Lv
We introduce a large language model to parse the text and identify stylization goals and specific styles.
1 code implementation • 23 Sep 2023 • Xiang Li, JunHao Chen, Chao Li, Hongwu Lv
Audio recognition in specialized areas such as birdsong and submarine acoustics faces challenges in large-scale pre-training because sampling environments and domain-specific requirements limit the number of available samples.
no code implementations • 28 Aug 2023 • Baoli Zhang, Haining Xie, Pengfan Du, JunHao Chen, Pengfei Cao, Yubo Chen, Shengping Liu, Kang Liu, Jun Zhao
To this end, we propose the ZhuJiu benchmark, which has the following strengths: (1) Multi-dimensional ability coverage: We comprehensively evaluate LLMs across 7 ability dimensions covering 51 tasks.
1 code implementation • 23 Aug 2023 • Feiyu Zhang, Liangzhi Li, JunHao Chen, Zhouqiang Jiang, Bowen Wang, Yiming Qian
Unlike pruning, this approach is not limited by the initial number of training parameters, and each parameter matrix has a higher rank upper bound for the same training overhead.
no code implementations • 9 Apr 2023 • JunHao Chen, Xueli wang
In this article, we strengthen the consistency proofs of several previously weakly consistent random forest variants to establish strong consistency, and improve the data utilization of these variants, in order to obtain better theoretical properties and experimental performance.
no code implementations • 28 Nov 2022 • JunHao Chen
In this paper, we modify the consistency proofs of several previously weakly consistent random forest variants to establish strong consistency, and improve the data utilization of these variants in order to obtain better theoretical properties and experimental performance.