no code implementations • 6 Apr 2018 • Yonghong Tian, Zeyu Li, Zhiwei Xu, Xuying Meng, Bing Zheng
Recently, the online car-hailing service, Didi, has emerged as a leader in the sharing economy.
1 code implementation • Findings (NAACL) 2022 • Yiyi Liu, Yequan Wang, Aixin Sun, Xuying Meng, Jing Li, Jiafeng Guo
Based on this dual-channel framework, we design the Dual-Channel Network (DC-Net) to recognize sentiment conflict.
no code implementations • 23 Mar 2022 • Yequan Wang, Xuying Meng, Yiyi Liu, Aixin Sun, Yao Wang, Yinhe Zheng, Minlie Huang
Hence, these models are not optimized for dialog-level emotion detection, i.e., predicting the emotion category of a dialog as a whole.
1 code implementation • COLING 2022 • Yequan Wang, Xiang Li, Aixin Sun, Xuying Meng, Huaming Liao, Jiafeng Guo
CofeNet is able to extract complicated quotations with components of variable lengths and complex structures.
no code implementations • 12 Oct 2022 • Yequan Wang, Jiawen Deng, Aixin Sun, Xuying Meng
Recently, many works have utilized perplexity (PPL) to evaluate the quality of generated text.
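Perplexity, the metric this snippet refers to, is the exponential of the average negative log-probability the model assigns to each token. A minimal stdlib-only sketch (the token log-probabilities here are illustrative, not from any model in the listed paper):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-probability per token."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

# If every token gets probability 1/4, perplexity is exactly 4.
ppl = perplexity([math.log(0.25)] * 10)
print(ppl)  # → 4.0 (up to floating-point rounding)
```

Lower perplexity means the model finds the text more predictable, which is why it is a common proxy for generation quality.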
no code implementations • 23 Oct 2022 • Xiaohan Xu, Xuying Meng, Yequan Wang
Further experiments prove that abundant prior knowledge is conducive to high-quality emotional support, and a well-learned latent variable is critical to the diversity of generations.
no code implementations • 15 Mar 2023 • Yequan Wang, Hengran Zhang, Aixin Sun, Xuying Meng
Given comparative text, comparative relation extraction aims to extract two targets (e.g., two cameras) in comparison and the aspect they are compared for (e.g., image quality).
1 code implementation • 14 Apr 2023 • Yiqun Yao, Siqi Fan, Xiusheng Huang, Xuezhi Fang, Xiang Li, Ziyi Ni, Xin Jiang, Xuying Meng, Peng Han, Shuo Shang, Kang Liu, Aixin Sun, Yequan Wang
With around 14% of the one-time pre-training cost, we can accurately forecast the loss for models up to 52B.
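Forecasting the loss of a large model from cheaper small-model runs is typically done by fitting a scaling-law curve and extrapolating. The paper's actual procedure is not shown here; the following is a generic sketch that fits a power law L(N) = a·N^(−b) by least squares in log-log space, with made-up (model size, loss) pairs:

```python
import math

# Hypothetical (parameter count, training loss) pairs from small runs.
points = [(1e8, 3.2), (1e9, 2.6), (1e10, 2.1)]

# Fit log L = log a - b * log N by ordinary least squares.
xs = [math.log(n) for n, _ in points]
ys = [math.log(l) for _, l in points]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
log_a = my + b * mx

# Extrapolate the fitted curve to a 52B-parameter model.
forecast_52b = math.exp(log_a - b * math.log(52e9))
```

Under this toy fit the extrapolated 52B loss lands below the smallest observed loss, as a monotonically decreasing power law requires.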
no code implementations • 19 Apr 2023 • Xuying Meng, Chungang Lin, Yequan Wang, Yujun Zhang
Pretrained models for network traffic can utilize large-scale raw data to learn the essential characteristics of network traffic, and generate distinguishable results for input traffic without considering specific downstream tasks.
no code implementations • 2 May 2023 • Xiang Li, Xin Jiang, Xuying Meng, Aixin Sun, Yequan Wang
FreeLM outperforms large models, e.g., GPT-3 and InstructGPT, on a range of language understanding tasks in experiments.
no code implementations • 7 Sep 2023 • Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars.
no code implementations • 18 Sep 2023 • Jiatai Wang, Zhiwei Xu, Xuewen Yang, Hailong Li, Bo Li, Xuying Meng
However, as contrastive learning continues to evolve within computer vision, self-supervised learning has made substantial research progress and is progressively becoming dominant in MVC methods.
no code implementations • 4 Jan 2024 • Haitong Luo, Xuying Meng, Suhang Wang, Hanyun Cao, Weiyao Zhang, Yequan Wang, Yujun Zhang
In this study, we present a novel approach called Spectral-based Complementary Graph Neural Networks (SComGNN) that utilizes the spectral properties of complementary item graphs.
no code implementations • 4 Mar 2024 • Siqi Fan, Xin Jiang, Xiang Li, Xuying Meng, Peng Han, Shuo Shang, Aixin Sun, Yequan Wang, Zhongyuan Wang
To answer this question, we first indicate that Not all Layers are Necessary during Inference by statistically analyzing the activated layers across tasks.
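The idea that not all layers are needed at inference time is commonly realized as early exit: stop running layers once an intermediate prediction is confident enough. The paper's exact criterion is not reproduced here; this is a hedged toy sketch with hypothetical `layers` and `classify` callables:

```python
def early_exit_forward(layers, x, classify, threshold=0.9):
    """Run layers sequentially; return early once the intermediate
    prediction's confidence reaches the threshold (toy early-exit)."""
    label = None
    for i, layer in enumerate(layers):
        x = layer(x)
        conf, label = classify(x)
        if conf >= threshold:
            return label, i + 1  # number of layers actually executed
    return label, len(layers)

# Toy demo: each "layer" adds 0.25 confidence; classify reads it back.
layers = [lambda s: s + 0.25] * 5
label, used = early_exit_forward(layers, 0.0, lambda s: (s, "pos"))
# Confidence reaches 1.0 >= 0.9 after the 4th layer, so used == 4.
```

Skipping the remaining layers for "easy" inputs is what yields the inference-time savings such methods report.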