no code implementations • 15 Aug 2024 • Chen Zeng, Jiahui Wang, Haoran Shen, Qiao Wang
For some types of functions, MLP outperforms or performs comparably to KAN.
no code implementations • 20 Jul 2024 • Haoran Shen, Chen Zeng, Jiahui Wang, Qiao Wang
For noise with any fixed SNR, when we increase the amount of training data by a factor of $r$, the test loss (RMSE) of KANs follows a trend of $\text{test-loss} \sim \mathcal{O}(r^{-\frac{1}{2}})$ as $r\to +\infty$.
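A minimal sketch of how such a scaling trend could be checked empirically (this is not from the paper; the loss values below are synthetic placeholders): fit the slope of log test-loss against log $r$ and compare it to $-\tfrac{1}{2}$.

```python
# Hypothetical check of an O(r^{-1/2}) test-loss trend (synthetic data, not reported results).
import numpy as np

r = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)  # data-scaling factors
rng = np.random.default_rng(0)
# Synthetic RMSE values that roughly follow r^{-1/2} with mild noise.
test_loss = 0.3 * r ** -0.5 * np.exp(rng.normal(0.0, 0.05, r.size))

# Least-squares fit: log(test_loss) = slope * log(r) + intercept
slope, intercept = np.polyfit(np.log(r), np.log(test_loss), 1)
print(f"fitted exponent ~ {slope:.2f} (an O(r^-1/2) trend gives roughly -0.5)")
```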
2 code implementations • 22 Sep 2023 • Xirong Cao, Xiang Li, Divyesh Jadav, Yanzhao Wu, Zhehui Chen, Chen Zeng, Wenqi Wei
Diffusion models have gained prominence for their capabilities in data generation and transformation, achieving state-of-the-art performance on various tasks in both the image and audio domains.
1 code implementation • 28 Aug 2023 • Guanting Dong, Zechen Wang, Jinxu Zhao, Gang Zhao, Daichi Guo, Dayuan Fu, Tingfeng Hui, Chen Zeng, Keqing He, Xuefeng Li, LiWen Wang, Xinyue Cui, Weiran Xu
The objective of few-shot named entity recognition is to identify named entities with limited labeled instances.
Ranked #1 for Few-shot NER on Few-NERD (INTER)
1 code implementation • 17 Jun 2023 • Weihao Zeng, Keqing He, Yejie Wang, Chen Zeng, Jingang Wang, Yunsen Xian, Weiran Xu
Pre-trained language models trained on general text have achieved great success across NLP tasks.
1 code implementation • 28 May 2023 • Yutao Mou, Xiaoshuai Song, Keqing He, Chen Zeng, Pei Wang, Jingang Wang, Yunsen Xian, Weiran Xu
Previous methods suffer from a coupling between pseudo-label disambiguation and representation learning: the reliability of pseudo labels depends on representation learning, while representation learning is in turn restricted by the pseudo labels.
no code implementations • 27 Feb 2023 • Guanting Dong, Zechen Wang, LiWen Wang, Daichi Guo, Dayuan Fu, Yuxiang Wu, Chen Zeng, Xuefeng Li, Tingfeng Hui, Keqing He, Xinyue Cui, QiXiang Gao, Weiran Xu
Specifically, we decouple class-specific prototypes and contextual semantic prototypes via two masking strategies, guiding the model to focus on two different kinds of semantic information during inference.
no code implementations • 27 Feb 2023 • Daichi Guo, Guanting Dong, Dayuan Fu, Yuxiang Wu, Chen Zeng, Tingfeng Hui, LiWen Wang, Xuefeng Li, Zechen Wang, Keqing He, Xinyue Cui, Weiran Xu
In real dialogue scenarios, existing slot filling models, which tend to memorize entity patterns, generalize significantly worse when faced with Out-of-Vocabulary (OOV) problems.
1 code implementation • 13 Feb 2023 • Yiren Jian, Chongyang Gao, Chen Zeng, Yunjie Zhao, Soroush Vosoughi
Our findings indicate that the learned structural patterns of proteins can be transferred to RNAs, opening up potential new avenues for research.
no code implementations • COLING 2022 • Guanting Dong, Daichi Guo, LiWen Wang, Xuefeng Li, Zechen Wang, Chen Zeng, Keqing He, Jinzheng Zhao, Hao Lei, Xinyue Cui, Yi Huang, Junlan Feng, Weiran Xu
Most existing slot filling models tend to memorize inherent patterns of entities and corresponding contexts from training data.
no code implementations • 11 Jan 2022 • Chen Zeng, Grant Hecht, Prajit KrisshnaKumar, Raj K. Shah, Souma Chowdhury, Eleonora M. Botta
This tether-net system is subject to several sources of uncertainty in sensing and actuation that affect the performance of its net launch and closing control.
1 code implementation • 24 Mar 2021 • Chen Zeng, Yue Yu, Shanshan Li, Xin Xia, Zhiming Wang, Mingyang Geng, Bailin Xiao, Wei Dong, Xiangke Liao
With the rapid growth in the number of public code repositories, developers increasingly want to retrieve precise code snippets using natural language queries.
no code implementations • 8 Dec 2020 • Zhibo Zhang, Chen Zeng, Maulikkumar Dhameliya, Souma Chowdhury, Rahul Rai
The thermal data is processed through a thresholding and Kalman filter approach to detect and track the bounding box.
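A minimal sketch of the general idea (hypothetical, not the authors' pipeline): threshold a thermal frame to obtain a hot-spot centroid, then smooth the detections with a constant-velocity Kalman filter.

```python
# Hypothetical threshold-then-Kalman tracking sketch on a synthetic "thermal" frame.
import numpy as np

def detect_centroid(frame, thresh=0.7):
    """Return the (row, col) centroid of pixels above the threshold, or None."""
    ys, xs = np.nonzero(frame > thresh)
    if ys.size == 0:
        return None
    return np.array([ys.mean(), xs.mean()])

# Constant-velocity Kalman filter over the state [row, col, v_row, v_col].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is measured
Q = np.eye(4) * 1e-2                        # process noise
R = np.eye(2) * 1.0                         # measurement noise
x = np.zeros(4)                             # initial state
P = np.eye(4) * 10.0                        # initial covariance

rng = np.random.default_rng(1)
for t in range(20):
    # Synthetic frame: a hot spot drifting to the right over background noise.
    frame = rng.random((64, 64)) * 0.5
    frame[30:34, 10 + t:14 + t] = 1.0

    # Predict step.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update step with the thresholded detection, if any.
    z = detect_centroid(frame)
    if z is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P

print("tracked centroid (row, col):", x[:2])
```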