no code implementations • 28 Oct 2024 • Yiming Cui, Wei-Nan Zhang, Ting Liu
The attention mechanism plays an important role in machine reading comprehension (MRC) models.
no code implementations • 28 Oct 2024 • Wei-Nan Zhang, Yiming Cui, Kaiyan Zhang, Yifa Wang, Qingfu Zhu, Lingzhi Li, Ting Liu
To address this issue, in this paper, we propose a static and dynamic attention-based approach to model the dialogue history and then generate open-domain multi-turn dialogue responses.
2 code implementations • 24 Sep 2024 • Taowen Wang, Yiyang Liu, James Chenhao Liang, Junhan Zhao, Yiming Cui, Yuning Mao, Shaoliang Nie, Jiahao Liu, Fuli Feng, Zenglin Xu, Cheng Han, Lifu Huang, Qifan Wang, Dongfang Liu
Instruction tuning has emerged as an effective strategy for achieving zero-shot generalization by finetuning pretrained models on diverse multimodal tasks.
1 code implementation • 19 Sep 2024 • Shiyu Fang, Jiaqi Liu, Mingyu Ding, Yiming Cui, Chen Lv, Peng Hang, Jian Sun
At present, Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios remain unsatisfactory.
no code implementations • 12 Sep 2024 • Yiming Cui, Jiajia Guo, Chao-Kai Wen, Shi Jin
In the realm of reconfigurable intelligent surface (RIS)-assisted communication systems, the connection between a base station (BS) and user equipment (UE) is formed by a cascaded channel, merging the BS-RIS and RIS-UE channels.
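The cascaded channel admits a compact form: H_cascaded = H_RIS-UE · diag(φ) · H_BS-RIS, where φ holds the per-element RIS phase shifts. A minimal NumPy sketch of this standard model, with all dimensions and values chosen purely for illustration (not the paper's setup):

```python
import numpy as np

# Toy cascaded BS-RIS-UE channel; sizes below are placeholders.
n_bs, n_ris, n_ue = 8, 64, 2
rng = np.random.default_rng(0)

# i.i.d. Rayleigh sub-channels: BS -> RIS and RIS -> UE.
H_br = (rng.standard_normal((n_ris, n_bs)) + 1j * rng.standard_normal((n_ris, n_bs))) / np.sqrt(2)
H_ru = (rng.standard_normal((n_ue, n_ris)) + 1j * rng.standard_normal((n_ue, n_ris))) / np.sqrt(2)

# The RIS applies one phase shift per element, i.e., a diagonal reflection matrix.
Phi = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n_ris)))

H_cascaded = H_ru @ Phi @ H_br   # effective BS -> UE channel seen by the receiver
print(H_cascaded.shape)          # (2, 8)
```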
no code implementations • 12 Jul 2024 • Jinglong Gao, Xiao Ding, Yiming Cui, Jianbai Zhao, Hepeng Wang, Ting Liu, Bing Qin
To improve the performance of large language models (LLMs), researchers have explored providing LLMs with textual task-solving experience via prompts.
no code implementations • 8 Jul 2024 • Fan Qi, Jiajia Guo, Yiming Cui, Xiangyi Li, Chao-Kai Wen, Shi Jin
In Wi-Fi systems, channel state information (CSI) plays a crucial role in enabling access points to execute beamforming operations.
no code implementations • CVPR 2024 • Yawen Lu, Dongfang Liu, Qifan Wang, Cheng Han, Yiming Cui, Zhiwen Cao, Xueling Zhang, Yingjie Victor Chen, Heng Fan
We capitalize on a dual mechanism involving the feature denoiser and the prototypical learner to decipher the intricacies of motion.
no code implementations • 29 May 2024 • Yiming Cui, Cheng Han, Dongfang Liu
Spatial global-local aggregation fuses the local information from the neighboring frames and the global semantics from the current frame to eliminate feature degradation.
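A hedged sketch of such an aggregation step, assuming a cross-attention fusion in which the current frame's features query local features pooled from neighboring frames; module choices and shapes are illustrative, not the paper's exact design:

```python
import torch
import torch.nn as nn

d, n_tokens, n_neighbors = 256, 196, 2
attn = nn.MultiheadAttention(embed_dim=d, num_heads=8, batch_first=True)

cur = torch.randn(1, n_tokens, d)                # global semantics of the current frame
nbr = torch.randn(1, n_neighbors * n_tokens, d)  # local features from neighboring frames

fused, _ = attn(query=cur, key=nbr, value=nbr)   # current frame attends to its neighbors
out = cur + fused                                # residual fusion against feature degradation
print(out.shape)                                 # torch.Size([1, 196, 256])
```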
3 code implementations • 4 Mar 2024 • Yiming Cui, Xin Yao
Mixtral, a representative sparse mixture of experts (SMoE) language model, has received significant attention due to its unique model design and superior performance.
1 code implementation • 23 Jan 2024 • Cheng Han, Qifan Wang, Yiming Cui, Wenguan Wang, Lifu Huang, Siyuan Qi, Dongfang Liu
As the scale of vision models continues to grow, the emergence of Visual Prompt Tuning (VPT) as a parameter-efficient transfer learning technique has gained attention due to its superior performance compared to traditional full-finetuning.
no code implementations • 2 Nov 2023 • Yiming Cui, Cheng Han, Dongfang Liu
The advancement of computer vision has pushed visual analysis tasks from still images to the video domain.
1 code implementation • 22 Sep 2023 • James C. Liang, Yiming Cui, Qifan Wang, Tong Geng, Wenguan Wang, Dongfang Liu
This paper presents CLUSTERFORMER, a universal vision model that is based on the CLUSTERing paradigm with TransFORMER.
1 code implementation • ICCV 2023 • Cheng Han, Qifan Wang, Yiming Cui, Zhiwen Cao, Wenguan Wang, Siyuan Qi, Dongfang Liu
Specifically, we introduce a set of learnable key-value prompts and visual prompts into self-attention and input layers, respectively, to improve the effectiveness of model fine-tuning.
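A minimal sketch of that mechanism, assuming standard ViT-style attention: learnable prompts are prepended to the input tokens and, inside attention, to the keys and values only, while the backbone stays frozen. Shapes are illustrative, not the released code:

```python
import torch
import torch.nn as nn

d, n_heads, n_tokens, n_prompts = 768, 12, 197, 5
attn = nn.MultiheadAttention(embed_dim=d, num_heads=n_heads, batch_first=True)

input_prompt = nn.Parameter(torch.zeros(1, n_prompts, d))  # visual prompts (input layer)
kv_prompt = nn.Parameter(torch.zeros(1, n_prompts, d))     # key-value prompts (attention)

x = torch.randn(1, n_tokens, d)           # frozen backbone's patch embeddings
x = torch.cat([input_prompt, x], dim=1)   # prompts prepended to the input sequence
kv = torch.cat([kv_prompt, x], dim=1)     # prompts also prepended to keys/values
out, _ = attn(query=x, key=kv, value=kv)  # the query side carries no K/V prompts
print(out.shape)                          # torch.Size([1, 202, 768])
```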
1 code implementation • 23 Jul 2023 • Yiming Cui, Linjie Yang, Haichao Yu
Transformer-based detection and segmentation methods use a list of learned detection queries to retrieve information from the transformer network and learn to predict the location and category of one specific object from each query.
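A compact DETR-style sketch of this query mechanism (dimensions and head counts are placeholders): each learned query cross-attends into the encoder's image features and is decoded into one class label and one box:

```python
import torch
import torch.nn as nn

d, n_queries, n_classes = 256, 100, 80
queries = nn.Parameter(torch.zeros(1, n_queries, d))        # learned detection queries
cross_attn = nn.MultiheadAttention(embed_dim=d, num_heads=8, batch_first=True)
class_head = nn.Linear(d, n_classes + 1)                    # +1 for the "no object" class
box_head = nn.Linear(d, 4)                                  # (cx, cy, w, h), normalized

memory = torch.randn(1, 196, d)                             # transformer encoder features
hs, _ = cross_attn(query=queries, key=memory, value=memory) # queries retrieve information
logits, boxes = class_head(hs), box_head(hs).sigmoid()      # one object prediction per query
print(logits.shape, boxes.shape)                            # (1, 100, 81) (1, 100, 4)
```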
1 code implementation • 27 Jun 2023 • Zihang Xu, Ziqing Yang, Yiming Cui, Shijin Wang
IDOL achieves state-of-the-art performance on ReClor and LogiQA, the two most representative benchmarks in logical reasoning MRC. It also generalizes to different pre-trained models and to other types of MRC benchmarks such as RACE and SQuAD 2.0, while keeping competitive general language understanding ability as measured on the GLUE tasks.
Ranked #1 on Reading Comprehension on ReClor
1 code implementation • 13 May 2023 • Yiming Cui, Lecheng Ruan, Hang-Cheng Dong, Qiang Li, Zhongming Wu, Tieyong Zeng, Feng-Lei Fan
We prove a theorem to explain why Cloud-RAIN can enjoy reflection symmetry.
6 code implementations • 17 Apr 2023 • Yiming Cui, Ziqing Yang, Xin Yao
While several large language models, such as LLaMA, have been open-sourced by the community, these predominantly focus on English corpora, limiting their usefulness for other languages.
1 code implementation • 3 Apr 2023 • Xin Yao, Ziqing Yang, Yiming Cui, Shijin Wang
In natural language processing, pre-trained language models have become essential infrastructures.
no code implementations • 24 Mar 2023 • Yiming Cui, Jiajia Guo, Chao-Kai Wen, Shi Jin
Additionally, since the heterogeneity of CSI datasets in different UEs can degrade the performance of the FEEL-based framework, we introduce a personalization strategy to improve feedback performance.
1 code implementation • 15 Mar 2023 • Yiming Cui, Linjie Yang
With Transformer-based object detectors achieving better performance on image-domain tasks, recent works have begun to extend these methods to video object detection.
1 code implementation • 11 Mar 2023 • Feng-Lei Fan, Hang-Cheng Dong, Zhongming Wu, Lecheng Ruan, Tieyong Zeng, Yiming Cui, Jing-Xiao Liao
In this paper, with theoretical and empirical studies, we show that quadratic networks enjoy parametric efficiency, thereby confirming that the superior performance of quadratic networks is due to the intrinsic expressive capability.
1 code implementation • CVPR 2023 • Yiming Cui
With Transformer-based object detectors achieving better performance on image-domain tasks, recent works have begun to extend these methods to video object detection.
1 code implementation • 15 Dec 2022 • Ziqing Yang, Yiming Cui, Xin Yao, Shijin Wang
In this work, we propose a structured pruning method GRAIN (Gradient-based Intra-attention pruning), which performs task-specific pruning with knowledge distillation and yields highly effective models.
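As a rough illustration of gradient-based importance scoring, here is a generic first-order Taylor criterion; GRAIN's actual intra-attention criterion and pruning schedule may differ in detail:

```python
import torch

def importance_scores(param: torch.nn.Parameter) -> torch.Tensor:
    # |w * dL/dw| approximates the loss change from zeroing each weight.
    return (param * param.grad).abs()

w = torch.nn.Parameter(torch.randn(4, 4))
loss = (w.sum() - 1.0) ** 2               # stand-in for a task + distillation loss
loss.backward()

scores = importance_scores(w)
threshold = scores.flatten().kthvalue(scores.numel() // 2).values
pruned_w = w.detach() * (scores >= threshold)   # keep roughly the top half
print((scores >= threshold).float().mean())     # fraction of weights kept
```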
1 code implementation • 10 Nov 2022 • Yiming Cui, Wanxiang Che, Shijin Wang, Ting Liu
We propose LERT, a pre-trained language model that is trained on three types of linguistic features along with the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy.
Ranked #6 on Stock Market Prediction on Astock
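A sketch of what LIP-style multi-task pre-training could look like: one MLM head plus heads for linguistic features over shared encoder states. The three feature types, label counts, and the plain weighted-sum loss below are assumptions for illustration, not LERT's exact schedule:

```python
import torch
import torch.nn as nn

d, vocab = 768, 21128
heads = nn.ModuleDict({
    "mlm": nn.Linear(d, vocab),  # original MLM objective
    "pos": nn.Linear(d, 30),     # part-of-speech tags (assumed label count)
    "ner": nn.Linear(d, 9),      # named-entity labels (assumed)
    "dep": nn.Linear(d, 45),     # dependency relations (assumed)
})
weights = {"mlm": 1.0, "pos": 0.5, "ner": 0.5, "dep": 0.5}  # assumed weighting

hidden = torch.randn(2, 128, d)  # shared encoder output (batch, seq, dim)
labels = {k: torch.randint(0, heads[k].out_features, (2, 128)) for k in heads}

loss = sum(
    weights[k] * nn.functional.cross_entropy(
        heads[k](hidden).flatten(0, 1), labels[k].flatten())
    for k in heads
)
print(loss.item())
```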
no code implementations • 31 Oct 2022 • Yiming Cui, Jiajia Guo, Zheng Cao, Huaze Tang, Chao-Kai Wen, Shi Jin, Xin Wang, Xiaolin Hou
Firstly, an autoencoder KD-based method is introduced by training a student autoencoder to mimic the reconstructed CSI of a pretrained teacher autoencoder.
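A minimal sketch of that distillation setup: the student autoencoder regresses onto the teacher's reconstructed CSI rather than only the ground truth. Architectures below are placeholders:

```python
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """Placeholder CSI autoencoder: encode to a short codeword, then reconstruct."""
    def __init__(self, dim=512, code=64):
        super().__init__()
        self.enc = nn.Linear(dim, code)  # UE side: compress CSI into a feedback codeword
        self.dec = nn.Linear(code, dim)  # BS side: reconstruct the CSI

    def forward(self, x):
        return self.dec(self.enc(x))

teacher, student = TinyAE(), TinyAE()    # the student would be lighter in practice
csi = torch.randn(8, 512)                # a batch of flattened CSI samples
with torch.no_grad():
    target = teacher(csi)                # teacher's reconstruction is the KD target
loss = nn.functional.mse_loss(student(csi), target)
loss.backward()
```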
no code implementations • 2 Oct 2022 • Yiming Cui
Video object detection is a fundamental yet challenging task in computer vision.
no code implementations • 12 Jul 2022 • Yiming Cui, Linjie Yang, Ding Liu
Object detection is a basic computer vision task to localize and categorize objects in a given image.
1 code implementation • 22 May 2022 • Liqi Yan, Qifan Wang, Yiming Cui, Fuli Feng, Xiaojun Quan, Xiangyu Zhang, Dongfang Liu
Video captioning is a challenging task as it needs to accurately transform visual understanding into natural language description.
no code implementations • SemEval (NAACL) 2022 • Zheng Chu, Ziqing Yang, Yiming Cui, Zhigang Chen, Ming Liu
The same multi-word expressions may have different meanings in different sentences.
1 code implementation • SemEval (NAACL) 2022 • Zihang Xu, Ziqing Yang, Yiming Cui, Zhigang Chen
This paper describes our system designed for SemEval-2022 Task 8: Multilingual News Article Similarity.
no code implementations • ACL 2022 • Ziqing Yang, Yiming Cui, Zhigang Chen
Pre-trained language models have become prevalent in natural language processing and serve as the backbones of many NLP tasks, but the demands for computational resources have limited their applications.
1 code implementation • 14 Mar 2022 • Yiming Cui, Ziqing Yang, Ting Liu
We permute a proportion of the input text, and the training objective is to predict the position of the original token.
Ranked #4 on Stock Market Prediction on Astock
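A toy sketch of that objective: shuffle a proportion of tokens and record, for each shuffled slot, where its token originally lived; the model's target is that original position. Whitespace tokenization and the 15% ratio are assumptions here:

```python
import random

tokens = "the quick brown fox jumps over the lazy dog".split()
k = max(2, int(0.15 * len(tokens)))        # proportion of positions to permute

src_positions = random.sample(range(len(tokens)), k)
dst_positions = src_positions[:]
random.shuffle(dst_positions)

permuted = tokens[:]
for src, dst in zip(src_positions, dst_positions):
    permuted[dst] = tokens[src]            # move each chosen token to a new slot

# Target: for each permuted slot, the position its token originally came from.
targets = {dst: src for src, dst in zip(src_positions, dst_positions)}
print(permuted, targets)
```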
no code implementations • COLING 2022 • Ziqing Yang, Zihang Xu, Yiming Cui, Baoxin Wang, Min Lin, Dayong Wu, Zhigang Chen
It covers Standard Chinese, Yue Chinese, and six other ethnic minority languages.
no code implementations • 28 Feb 2022 • Ziqing Yang, Yiming Cui, Zhigang Chen, Shijin Wang
In this paper, we aim to improve the multilingual model's supervised and zero-shot performance simultaneously only with the resources from supervised languages.
no code implementations • 20 Jan 2022 • Weihuang Xu, Guohao Yu, Yiming Cui, Romain Gloaguen, Alina Zare, Jason Bonnette, Joel Reyes-Cabrera, Ashish Rajurkar, Diane Rowland, Roser Matamala, Julie D. Jastrow, Thomas E. Juenger, Felix B. Fritschi
By introducing this dataset, we aim to facilitate the automatic segmentation of roots and the research of RSA with deep learning and other image analysis algorithms.
no code implementations • 15 Oct 2021 • Yiming Cui, Zhiwen Cao, Yixin Xie, Xingyu Jiang, Feng Tao, Yingjie Chen, Lin Li, Dongfang Liu
The existing MOTS studies face two critical challenges: 1) the published datasets inadequately capture the real-world complexity required for network training across various driving settings; 2) annotation tools in the working pipeline are under-studied in the literature, which limits the quality of MOTS learning examples.
1 code implementation • 26 Aug 2021 • Yiming Cui, Wei-Nan Zhang, Wanxiang Che, Ting Liu, Zhigang Chen, Shijin Wang
Achieving human-level performance on some of the Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs).
1 code implementation • ICCV 2021 • Yiming Cui, Liqi Yan, Zhiwen Cao, Dongfang Liu
One of the popular solutions is to exploit the temporal information and enhance per-frame representation through aggregating features from neighboring frames.
no code implementations • Joint Conference on Lexical and Computational Semantics 2021 • Ziqing Yang, Yiming Cui, Chenglei Si, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
Adversarial training (AT) as a regularization method has proved its effectiveness on various tasks.
1 code implementation • EMNLP (MRQA) 2021 • Ziqing Yang, Wentao Ma, Yiming Cui, Jiani Ye, Wanxiang Che, Shijin Wang
Multilingual pre-trained models have achieved remarkable performance on cross-lingual transfer learning.
1 code implementation • 10 May 2021 • Yiming Cui, Ting Liu, Wanxiang Che, Zhigang Chen, Shijin Wang
Achieving human-level performance on some of the Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs).
Ranked #1 on Multi-Choice MRC on ExpMRC - RACE+ (test)
1 code implementation • CVPR 2021 • Dongfang Liu, Yiming Cui, Wenbo Tan, Yingjie Chen
Video instance segmentation (VIS) is a new and critical task in computer vision.
1 code implementation • 18 Feb 2021 • Liqi Yan, Yiming Cui, Yingjie Chen, Dongfang Liu
We extract the hierarchical feature maps from a convolutional neural network (CNN) and organically fuse the extracted features for image representations.
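A hedged sketch of hierarchical fusion: pool feature maps from several CNN stages, project them to a common width, and concatenate into one image representation. Stage shapes and the fusion rule are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Feature maps from three hypothetical CNN stages: (channels, spatial size).
stages = [torch.randn(1, c, s, s) for c, s in [(256, 56), (512, 28), (1024, 14)]]
proj = nn.ModuleList(nn.Linear(c, 128) for c in (256, 512, 1024))

pooled = [f.mean(dim=(2, 3)) for f in stages]                    # global average pool
fused = torch.cat([p(v) for p, v in zip(proj, pooled)], dim=-1)  # fuse across levels
print(fused.shape)                                               # torch.Size([1, 384])
```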
no code implementations • 7 Feb 2021 • Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
To deal with this challenge, most of the existing works consider paragraphs as nodes in a graph and propose graph-based methods to retrieve them.
1 code implementation • 4 Dec 2020 • Dongfang Liu, Yiming Cui, Liqi Yan, Christos Mousas, Baijian Yang, Yingjie Chen
In this work, we introduce a Denser Feature Network (DenserNet) for visual localization.
no code implementations • 13 Nov 2020 • Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
With the blooming of various Pre-trained Language Models (PLMs), Machine Reading Comprehension (MRC) has seen significant improvements on various benchmarks and has even surpassed human performance.
1 code implementation • COLING 2020 • Wentao Ma, Yiming Cui, Chenglei Si, Ting Liu, Shijin Wang, Guoping Hu
Most pre-trained language models (PLMs) construct word representations at the subword level with Byte-Pair Encoding (BPE) or its variations, by which OOV (out-of-vocabulary) words are largely avoided.
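A toy greedy longest-match splitter (WordPiece-flavored, not actual BPE) showing why subword vocabularies largely avoid OOV: an unknown word decomposes into known pieces, in the worst case single characters. The vocabulary below is invented:

```python
vocab = ({"trans", "##former", "##s"}
         | set("abcdefghijklmnopqrstuvwxyz")
         | {"##" + c for c in "abcdefghijklmnopqrstuvwxyz"})

def split_word(word: str) -> list[str]:
    pieces, start = [], 0
    while start < len(word):
        for end in range(len(word), start, -1):   # longest match first
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:
                pieces.append(piece)
                start = end
                break
        else:
            return ["[UNK]"]                      # only if even single chars are missing
    return pieces

print(split_word("transformers"))  # ['trans', '##former', '##s']
```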
no code implementations • 13 Aug 2020 • Dongfang Liu, Yiming Cui, Xiaolei Guo, Wei Ding, Baijian Yang, Yingjie Chen
It is a common practice for vehicles to use GPS to acquire location information.
6 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of the pre-trained language models.
Ranked #13 on Stock Market Prediction on Astock
1 code implementation • Findings (ACL) 2021 • Chenglei Si, Ziqing Yang, Yiming Cui, Wentao Ma, Ting Liu, Shijin Wang
To fill this important gap, we construct AdvRACE (Adversarial RACE), a new model-agnostic benchmark for evaluating the robustness of MRC models under four different types of adversarial attacks, including our novel distractor extraction and generation attacks.
1 code implementation • ACL 2020 • Wentao Ma, Yiming Cui, Ting Liu, Dong Wang, Shijin Wang, Guoping Hu
Human conversations contain many types of information, e.g., knowledge, common sense, and language habits.
1 code implementation • EMNLP 2020 • Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, Xiangzhan Yu
Deep pretrained language models have achieved great success through the paradigm of pretraining first and then fine-tuning.
3 code implementations • COLING 2020 • Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, Zhenzhong Lan
The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE allows new NLU models to be evaluated across a diverse set of tasks.
1 code implementation • COLING 2020 • Yiming Cui, Ting Liu, Ziqing Yang, Zhipeng Chen, Wentao Ma, Wanxiang Che, Shijin Wang, Guoping Hu
To add diversity to this area, in this paper we propose a new task called Sentence Cloze-style Machine Reading Comprehension (SC-MRC).
no code implementations • EMNLP 2020 • Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
We construct a strong baseline model to establish that, with the proper use of pre-trained models, graph structure may not be necessary for multi-hop question answering.
1 code implementation • ACL 2020 • Ziqing Yang, Yiming Cui, Zhipeng Chen, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
In this paper, we introduce TextBrewer, an open-source knowledge distillation toolkit designed for natural language processing.
no code implementations • 19 Dec 2019 • Yiming Cui, Wanxiang Che, Wei-Nan Zhang, Ting Liu, Shijin Wang, Guoping Hu
Story Ending Prediction is the task of selecting an appropriate ending for a given story; it requires the machine to understand the story and sometimes calls for commonsense knowledge.
no code implementations • 19 Dec 2019 • Xingyi Duan, Baoxin Wang, Ziyue Wang, Wentao Ma, Yiming Cui, Dayong Wu, Shijin Wang, Ting Liu, Tianxiang Huo, Zhen Hu, Heng Wang, Zhiyuan Liu
We present a Chinese judicial reading comprehension (CJRC) dataset which contains approximately 10K documents and almost 50K questions with answers.
no code implementations • 14 Nov 2019 • Yiming Cui, Wei-Nan Zhang, Wanxiang Che, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu
Recurrent Neural Networks (RNNs) are known as powerful models for handling sequential data and are widely utilized in various natural language processing tasks.
no code implementations • 9 Nov 2019 • Ziqing Yang, Yiming Cui, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
With virtual adversarial training (VAT), we explore the possibility of improving the RC models with semi-supervised learning and prove that examples from a different task are also beneficial.
no code implementations • CONLL 2019 • Wentao Ma, Yiming Cui, Nan Shao, Su He, Wei-Nan Zhang, Ting Liu, Shijin Wang, Guoping Hu
The heart of TripleNet is a novel attention mechanism named triple attention to model the relationships within the triple at four levels.
1 code implementation • IJCNLP 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
In this paper, we propose Cross-Lingual Machine Reading Comprehension (CLMRC) task for the languages other than English.
2 code implementations • 19 Jun 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang
To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc.
1 code implementation • 29 May 2019 • Haoyu Song, Wei-Nan Zhang, Yiming Cui, Dong Wang, Ting Liu
Given a conversational context with persona information, how a chatbot can exploit that information to generate diverse and sustainable conversations is still a non-trivial task.
no code implementations • 21 Nov 2018 • Zhipeng Chen, Yiming Cui, Wentao Ma, Shijin Wang, Guoping Hu
Machine Reading Comprehension (MRC) with multiple-choice questions requires the machine to read a given passage and select the correct answer among several candidates.
1 code implementation • IJCNLP 2019 • Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu
Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention.
no code implementations • COLING 2018 • Wei-Nan Zhang, Yiming Cui, Yifa Wang, Qingfu Zhu, Lingzhi Li, Lianqiang Zhou, Ting Liu
Despite the success of existing works on single-turn conversation generation, human conversing is actually a context-sensitive process once coherence is taken into consideration.
no code implementations • 15 Mar 2018 • Zhipeng Chen, Yiming Cui, Wentao Ma, Shijin Wang, Ting Liu, Guoping Hu
This paper describes the system which got the state-of-the-art results at SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge.
1 code implementation • LREC 2018 • Yiming Cui, Ting Liu, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu
Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention.
2 code implementations • ACL 2017 • Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, Guoping Hu
Cloze-style queries are representative problems in reading comprehension.
Ranked #3 on Question Answering on Children's Book Test
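For readers new to the format, a cloze-style query is just a sentence from the document with one word blanked out; the model must recover it. A one-line illustration with invented text:

```python
document = "The cat sat on the mat while the dog slept."
answer = "mat"
query = document.replace(answer, "XXXXX", 1)   # blank out the answer token
print(query)  # The cat sat on the XXXXX while the dog slept.
```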
no code implementations • COLING 2016 • Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu
Reading comprehension has seen a boom in recent NLP research.
no code implementations • ACL 2017 • Ting Liu, Yiming Cui, Qingyu Yin, Wei-Nan Zhang, Shijin Wang, Guoping Hu
Most existing approaches for zero pronoun resolution rely heavily on annotated data, which is often released by shared task organizers.
no code implementations • NAACL 2016 • Yiming Cui, Shijin Wang, Jianfeng Li
Artificial neural networks are powerful models that have been widely applied to many aspects of machine translation, such as language modeling and translation modeling.
no code implementations • 1 Dec 2015 • Yiming Cui, Conghui Zhu, Xiaoning Zhu, Tiejun Zhao
A pivot language is employed as a way to solve the data sparseness problem in machine translation, especially when data for a particular language pair does not exist.