1 code implementation • 30 Oct 2024 • Ziyao Shangguan, Chuhan Li, Yuxuan Ding, Yanan Zheng, Yilun Zhao, Tesca Fitzgerald, Arman Cohan
Our study of existing benchmarks shows that this capability of MFMs is likely overestimated, as many questions can be solved using a single frame, a few frames, or frames presented out of order.
1 code implementation • 15 Nov 2022 • Haike Xu, Zongyu Lin, Jing Zhou, Yanan Zheng, Zhilin Yang
In the fine-tuning setting, our approach also achieves new state-of-the-art results on a wide range of NLP tasks with only 1/4 of the parameters of previous methods.
2 code implementations • 9 Nov 2022 • Chonghua Liao, Yanan Zheng, Zhilin Yang
Natural language prompts have been shown to facilitate cross-task generalization for large language models.
1 code implementation • 8 Nov 2022 • Yanru Chen, Yanan Zheng, Zhilin Yang
Few-shot named entity recognition (NER) targets generalizing to unseen labels and/or domains with few labeled examples.
no code implementations • 1 Jul 2022 • Haonan Hu, Yan Jiang, Jiliang Zhang, Yanan Zheng, Qianbin Chen, Jie Zhang
The fog radio access network (F-RAN) has been proposed to address strict latency requirements: it offloads computation tasks generated by user equipments (UEs) to the edge to reduce processing latency.
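The latency benefit of offloading can be illustrated with a toy model: local processing costs only compute time, while offloading trades uplink transmission time for a faster edge processor. The formulas and numbers below are illustrative assumptions, not parameters from the paper.

```python
def local_latency(cycles: float, f_local_hz: float) -> float:
    """Time to process a task entirely on the UE."""
    return cycles / f_local_hz

def offload_latency(data_bits: float, uplink_bps: float,
                    cycles: float, f_edge_hz: float) -> float:
    """Uplink transmission time plus processing time at the edge node."""
    return data_bits / uplink_bps + cycles / f_edge_hz

# Example task: 1e9 CPU cycles of work, 1 Mb of input data.
t_local = local_latency(1e9, f_local_hz=1e9)
# 1.0 s when computed on the UE alone.
t_edge = offload_latency(1e6, uplink_bps=10e6, cycles=1e9, f_edge_hz=10e9)
# 0.1 s uplink + 0.1 s edge compute = 0.2 s when offloaded.
```

With these (assumed) numbers, offloading wins whenever the transmission overhead is smaller than the compute time saved by the faster edge processor.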
1 code implementation • 7 Nov 2021 • Xingcheng Yao, Yanan Zheng, Xiaocong Yang, Zhilin Yang
Pretrained language models have become the standard approach for many NLP tasks due to strong performance, but they are very expensive to train.
1 code implementation • ACL 2022 • Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Chonghua Liao, Jian Li, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, Zhilin Yang
The few-shot natural language understanding (NLU) task has attracted much recent attention.
1 code implementation • ACL 2022 • Jing Zhou, Yanan Zheng, Jie Tang, Jian Li, Zhilin Yang
Most previous methods for text data augmentation are limited to simple tasks and weak baselines.
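The "simple" augmentation methods this line refers to can be sketched as token-level perturbations; the two functions below are generic baselines of that kind (random swap and random deletion), shown only for illustration and not the method proposed in the paper.

```python
import random

def random_swap(tokens, n_swaps=1, seed=0):
    """Exchange n random pairs of tokens to produce a perturbed sentence."""
    rng = random.Random(seed)
    out = list(tokens)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_delete(tokens, p=0.1, seed=0):
    """Drop each token with probability p, keeping at least one token."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() >= p]
    return kept or [rng.choice(tokens)]

sentence = "the movie was surprisingly good".split()
swapped = random_swap(sentence, n_swaps=1)
shortened = random_delete(sentence, p=0.3)
```

Such label-preserving edits work on simple classification tasks but, as the abstract notes, tend not to help on harder tasks with strong baselines.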
9 code implementations • 18 Mar 2021 • Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang
Prompting a pretrained language model with natural language patterns has proven effective for natural language understanding (NLU).
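Prompt-based NLU of this kind typically wraps the task input in a cloze pattern and scores label words ("verbalizers") at the blank. The pattern, verbalizer, and stub scorer below are illustrative assumptions; a real setup would score candidates with a pretrained masked language model rather than the toy cue counter used here.

```python
# Hypothetical sentiment pattern and label-word mapping for illustration.
PATTERN = "Review: {text} Overall, it was [MASK]."
VERBALIZER = {"positive": "great", "negative": "terrible"}

def build_prompt(text: str) -> str:
    """Wrap the raw input in the cloze pattern."""
    return PATTERN.format(text=text)

def classify(text: str, score_word) -> str:
    """Pick the label whose verbalizer scores highest at the [MASK] slot."""
    prompt = build_prompt(text)
    return max(VERBALIZER, key=lambda lbl: score_word(prompt, VERBALIZER[lbl]))

def toy_score(prompt: str, word: str) -> float:
    """Stand-in scorer: counts crude sentiment cues instead of querying an LM."""
    cues = {"great": ["good", "loved"], "terrible": ["bad", "boring"]}
    return sum(prompt.count(c) for c in cues[word])

label = classify("I loved every minute of it.", toy_score)  # -> "positive"
```

The point of the sketch is the structure: the task is reformulated so the model's pretraining objective (filling a blank) directly yields a classification decision.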
10 code implementations • 1 Dec 2020 • Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun
However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus of GPT-3 is primarily English, and the parameters are not publicly available.