2 code implementations • Findings (ACL) 2022 • Sen Yang, Leyang Cui, Ruoxi Ning, Di Wu, Yue Zhang
Neural constituency parsers have reached practical performance on news-domain benchmarks.
no code implementations • Findings (NAACL) 2022 • Yue Zhang, Hongliang Fei, Dingcheng Li, Ping Li
Recently, prompt learning has received significant attention, where the downstream tasks are reformulated to the mask-filling task with the help of a textual prompt.
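A minimal sketch of this mask-filling reformulation, shown for sentiment classification with a BERT-style masked LM; the template and verbalizer words are illustrative assumptions, not this paper's design:

```python
# Sketch: recast sentiment classification as mask-filling with a textual
# prompt. The template and label words are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

review = "The movie was a waste of two hours."
prompt = f"{review} Overall, it was {tokenizer.mask_token}."

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Verbalizer: compare the logits of label words at the mask position.
verbalizer = {"positive": "great", "negative": "terrible"}
scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
print(max(scores, key=scores.get))  # predicted polarity
```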
no code implementations • EMNLP 2020 • Chenhua Chen, Zhiyang Teng, Yue Zhang
Aspect-level sentiment analysis aims to recognize the sentiment polarity of an aspect or a target in a comment.
no code implementations • EMNLP 2020 • Chen Jia, Yuefeng Shi, Qinrong Yang, Yue Zhang
We then integrate the entity information into BERT using Char-Entity-Transformer, which augments the self-attention using a combination of character and entity representations.
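The exact Char-Entity-Transformer is not reproduced here; the sketch below only illustrates the general idea of fusing character and entity representations before self-attention, with a learned gate as an assumed fusion mechanism:

```python
# Sketch: gate-fuse character and entity embeddings, then encode with
# self-attention. The gating scheme and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CharEntityEncoder(nn.Module):
    def __init__(self, num_chars, num_entities, dim=256, heads=4):
        super().__init__()
        self.char_emb = nn.Embedding(num_chars, dim)
        self.entity_emb = nn.Embedding(num_entities, dim)  # index 0 = no entity
        self.gate = nn.Linear(2 * dim, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, char_ids, entity_ids):
        c = self.char_emb(char_ids)        # (B, T, dim) character view
        e = self.entity_emb(entity_ids)    # (B, T, dim) entity view
        g = torch.sigmoid(self.gate(torch.cat([c, e], dim=-1)))
        return self.encoder(g * c + (1 - g) * e)  # gated combination

enc = CharEntityEncoder(num_chars=5000, num_entities=100)
out = enc(torch.randint(0, 5000, (2, 10)), torch.randint(0, 100, (2, 10)))
```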
no code implementations • EMNLP 2021 • Sixuan Wu, Jian Li, Peng Zhang, Yue Zhang
Recent research has investigated quantum NLP, designing algorithms that process natural language in quantum computers, and also quantum-inspired algorithms that improve NLP performance on classical computers.
1 code implementation • COLING 2022 • Kaixin Wu, Yue Zhang, Bojie Hu, Tong Zhang
Extensive experiments on ten WMT machine translation tasks show that the proposed model yields an average speedup of 1.35x (with almost no decrease in BLEU) over the state-of-the-art inference implementation.
no code implementations • EMNLP (sustainlp) 2021 • Yue Zhang, ChengCheng Hu, Yuqi Liu, Hui Fang, Jimmy Lin
It is well known that rerankers built on pretrained transformer models such as BERT have dramatically improved retrieval effectiveness in many tasks.
no code implementations • COLING (CogALex) 2020 • Lu Cao, Yulong Chen, Dandan Huang, Yue Zhang
Functional Magnetic Resonance Imaging (fMRI) provides a means to investigate human conceptual representation in cognitive and neuroscience studies, where researchers predict the fMRI activations with elicited stimuli inputs.
no code implementations • CCL 2020 • Meishan Zhang, Yue Zhang
Recent advances of multilingual word representations weaken the input divergences across languages, making cross-lingual transfer similar to the monolingual cross-domain and semi-supervised settings.
no code implementations • CCL 2020 • Shuailong Liang, Derek F. Wong, Yue Zhang
Based on 500,000 tweets posted on Twitter between January 22, 2020 and April 30, 2020 from different countries and regions, we studied topics and public opinions related to COVID-19, and found both similarities and differences in the common concerns and views of Twitter users across countries, as well as differing sentiment towards different topics. We found that most tweets carried strong emotions, with expressions of love and support being especially common. Overall, people's sentiment grew more positive over time.
1 code implementation • ACL 2022 • Chenhua Chen, Zhiyang Teng, Zhongqing Wang, Yue Zhang
Dependency trees have been intensively used with graph neural networks for aspect-based sentiment classification.
Aspect-Based Sentiment Analysis (ABSA)
Sentiment Classification
1 code implementation • ACL 2022 • Yue Zhang, Parisa Kordjamshidi
In this paper, we investigate the problem of vision and language navigation.
no code implementations • NAACL (ACL) 2022 • Rui Zhang, Yangfeng Ji, Yue Zhang, Rebecca J. Passonneau
We then survey the benefits and best practices of contrastive learning for various downstream NLP applications, including Text Classification, Question Answering, Summarization, Text Generation, Interpretability and Explainability, Commonsense Knowledge and Reasoning, and Vision-and-Language. This tutorial intends to help researchers in the NLP and computational linguistics community understand this emerging topic and promote future research directions that use contrastive learning for NLP applications.
no code implementations • INLG (ACL) 2021 • Yulong Chen, Yang Liu, Yue Zhang
We propose a shared task on summarizing real-life scenario dialogues, DialogSum Challenge, to encourage researchers to address challenges in dialogue summarization, which has been less studied by the summarization community.
1 code implementation • Findings (ACL) 2022 • Yafu Li, Yongjing Yin, Jing Li, Yue Zhang
Neural machine translation (NMT) has obtained significant performance improvement over the recent years.
1 code implementation • 3 Sep 2023 • Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.
no code implementations • 25 Aug 2023 • Yanjie Song, Yutong Wu, Yangyang Guo, Ran Yan, P. N. Suganthan, Yue Zhang, Witold Pedrycz, Yingwu Chen, Swagatam Das, Rammohan Mallipeddi, Oladayo Solomon Ajani
This paper presents a comprehensive survey on integrating reinforcement learning into the evolutionary algorithm, referred to as reinforcement learning-assisted evolutionary algorithm (RL-EA).
1 code implementation • 17 Aug 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, Yue Zhang
Moreover, we find that ALPACA maintains more knowledge and capacity than LLAMA during continual fine-tuning, which implies that general instruction tuning can help mitigate the forgetting of LLMs during further fine-tuning.
no code implementations • 31 Jul 2023 • Yue Zhang, Hehe Fan, Yi Yang, Mohan Kankanhalli
The proposed method, named Mixture of Depth and Point cloud video experts (DPMix), achieved the first place in the 4D Action Segmentation Track of the HOI4D Challenge 2023.
no code implementations • 22 Jul 2023 • Fu Lin, Haonan Gong, Mingkang Li, Zitong Wang, Yue Zhang, Xuexiong Luo
Previous works have observed that abnormal graphs mainly exhibit node-level and graph-level anomalies. However, these methods treat the two anomaly forms equally when evaluating abnormal graphs, which contradicts the fact that different types of abnormal graph data exhibit node-level and graph-level anomalies to different degrees.
1 code implementation • 18 Jul 2023 • Dayu Yang, Yue Zhang, Hui Fang
Nevertheless, existing zero-shot methods face three primary limitations: they are not universally applicable to all retrievers, their effectiveness lacks sufficient explainability, and they struggle to resolve common conversational ambiguities caused by omission.
no code implementations • 17 Jul 2023 • Dayu Yang, Yue Zhang, Hui Fang
In this work, we aim to reproduce multi-stage retrieval pipelines and explore one of the potential benefits of involving mixed-initiative interaction in conversational passage retrieval scenarios: reformulating raw queries.
no code implementations • 14 Jul 2023 • Marc Demoustier, Yue Zhang, Venkatesh Narasimha Murthy, Florin C. Ghesu, Dorin Comaniciu
Tracking the catheter tip poses several challenges: the tip can be occluded by contrast during angiography or by interventional devices, and it is in continuous movement due to cardiac and respiratory motion.
no code implementations • 14 Jul 2023 • Fan Ni, Xu Zhang, Jianhui Wu, Guan-Nan Dong, Aichun Zhu, Hui Liu, Yue Zhang
To the best of our knowledge, TVPRN is the first successful attempt to use video for the text-based person retrieval task, and it achieves state-of-the-art performance on the TVPReid dataset.
1 code implementation • 8 Jul 2023 • Yulong Chen, Huajian Zhang, Yijie Zhou, Xuefeng Bai, Yueguan Wang, Ming Zhong, Jianhao Yan, Yafu Li, Judy Li, Michael Zhu, Yue Zhang
Additionally, based on the same intuition, we propose a 2-Step method, which takes both the conversation and the summary as input to simulate the human annotation process.
1 code implementation • 6 Jul 2023 • Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.
no code implementations • 1 Jul 2023 • Jiong Cai, Yong Jiang, Yue Zhang, Chengyue Jiang, Ke Yu, Jianhui Ji, Rong Xiao, Haihong Tang, Tao Wang, Zhongqiang Huang, Pengjun Xie, Fei Huang, Kewei Tu
We also show that pretraining the QE module with auto-generated QE data from user logs can further improve the overall performance.
1 code implementation • 20 Jun 2023 • Yafu Li, Leyang Cui, Jianhao Yan, Yongjing Yin, Wei Bi, Shuming Shi, Yue Zhang
Most existing text generation models follow the sequence-to-sequence paradigm.
no code implementations • 19 Jun 2023 • Dongyu Ru, Lin Qiu, Xipeng Qiu, Yue Zhang, Zheng Zhang
Discourse analysis is an important task because it models intrinsic semantic structures between sentences in a document.
1 code implementation • 15 Jun 2023 • Xiaoyi Bao, Xiaotong Jiang, Zhongqing Wang, Yue Zhang, Guodong Zhou
To address these challenges, we propose an opinion tree parsing model, aiming to parse all the sentiment elements from an opinion tree, which is much faster, and can explicitly reveal a more comprehensive and complete aspect-level sentiment structure.
1 code implementation • 8 Jun 2023 • Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences.
1 code implementation • 7 Jun 2023 • Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, Xing Xie
The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness to prompts.
1 code implementation • 30 May 2023 • Yuqing Yang, Qipeng Guo, Xiangkun Hu, Yue Zhang, Xipeng Qiu, Zheng Zhang
Motivated by the fact that all event structures can be inferred from AMR, this work reformulates EAE as a link prediction problem on AMR graphs.
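As a rough illustration of the link-prediction view (the paper's graph encoder and label set are not reproduced here), trigger-candidate node pairs can be scored over AMR node embeddings with a bilinear scorer; everything below is an assumption for illustration:

```python
# Sketch: EAE as link prediction on an AMR graph: score (trigger, candidate)
# node pairs and predict a role label (including a "no-link" class).
import torch
import torch.nn as nn

class RoleLinkScorer(nn.Module):
    def __init__(self, node_dim=256, num_roles=5):
        super().__init__()
        self.bilinear = nn.Bilinear(node_dim, node_dim, num_roles)

    def forward(self, trigger, candidates):
        # trigger: (dim,) node embedding; candidates: (N, dim) node embeddings
        t = trigger.unsqueeze(0).expand_as(candidates)
        return self.bilinear(t, candidates)  # (N, num_roles) link logits

scorer = RoleLinkScorer()
logits = scorer(torch.randn(256), torch.randn(7, 256))
print(logits.argmax(dim=-1))  # predicted role per candidate node
```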
1 code implementation • 26 May 2023 • Cunxiang Wang, Haofei Yu, Yue Zhang
Open-Domain Question Answering (ODQA) systems necessitate a reader model capable of generating answers by simultaneously referring to multiple passages.
no code implementations • 26 May 2023 • Cunxiang Wang, Zhikun Xu, Qipeng Guo, Xiangkun Hu, Xuefeng Bai, Zheng Zhang, Yue Zhang
The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database.
1 code implementation • 25 May 2023 • Yue Zhang, Bo Zhang, Haochen Jiang, Zhenghua Li, Chen Li, Fei Huang, Min Zhang
We introduce NaSGEC, a new dataset to facilitate research on Chinese grammatical error correction (CGEC) for native speaker texts from multiple domains.
no code implementations • 23 May 2023 • Linyi Yang, Yaoxiao Song, Xuan Ren, Chenyang Lyu, Yidong Wang, Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang
Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution.
no code implementations • 23 May 2023 • Naihao Deng, YiKai Liu, Mingye Chen, Winston Wu, Siyang Liu, Yulong Chen, Yue Zhang, Rada Mihalcea
Our results show that our system can meet the diverse needs of NLP researchers and significantly accelerate the annotation process.
1 code implementation • 22 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang
Non-autoregressive translation (NAT) models have been extensively investigated within the context of sentence-level machine translation (MT) tasks, demonstrating comparable quality and superior translation speed when contrasted with autoregressive translation (AT) models.
no code implementations • 22 May 2023 • Yue Zhang, Leyang Cui, Deng Cai, Xinting Huang, Tao Fang, Wei Bi
ChatGPT and GPT-4 have attracted substantial interest from both academic and industrial circles, owing to their remarkable few-shot (or even zero-shot) ability to handle various tasks.
1 code implementation • 22 May 2023 • Yafu Li, Qintong Li, Leyang Cui, Wei Bi, Longyue Wang, Linyi Yang, Shuming Shi, Yue Zhang
In practical scenarios, the detector faces texts from various domains or LLMs without knowing their sources.
1 code implementation • 21 May 2023 • Cunxiang Wang, Sirui Cheng, Qipeng Guo, Zhikun Xu, Bowen Ding, Yidong Wang, Xiangkun Hu, Zheng Zhang, Yue Zhang
This study focuses on the evaluation of the Open Question Answering (Open-QA) task, which can directly estimate the factuality of large language models (LLMs).
1 code implementation • 20 May 2023 • Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli Zhang, Qiji Zhou, Yue Zhang
LogiCoT serves as an instruction set for teaching models logical reasoning and eliciting general reasoning skills.
no code implementations • 20 May 2023 • Yun Luo, Xiaotian Lin, Zhen Yang, Fandong Meng, Jie Zhou, Yue Zhang
Adapting the decision boundary for new representations is seldom considered. In this paper, we propose a Supervised Contrastive learning framework with an adaptive classification criterion for Continual Learning (SCCL). In our method, a contrastive loss is used to directly learn representations for different tasks, and a limited number of data samples are saved as the classification criterion.
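A minimal sketch of classifying with saved samples as the criterion (a nearest-saved-sample rule is assumed here; the paper's exact criterion may differ):

```python
# Sketch: classify a representation by its similarity to a small buffer of
# saved, labeled samples instead of a fixed linear decision boundary.
import torch
import torch.nn.functional as F

def classify_with_buffer(query, buffer_reps, buffer_labels, num_classes):
    """query: (dim,); buffer_reps: (N, dim); buffer_labels: (N,) ints."""
    sims = F.cosine_similarity(query.unsqueeze(0), buffer_reps, dim=-1)
    scores = torch.full((num_classes,), float("-inf"))
    for c in range(num_classes):
        mask = buffer_labels == c
        if mask.any():
            scores[c] = sims[mask].max()  # nearest saved sample of class c
    return scores.argmax().item()

pred = classify_with_buffer(torch.randn(128), torch.randn(20, 128),
                            torch.randint(0, 4, (20,)), num_classes=4)
```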
no code implementations • 19 May 2023 • Ya-Lin Zhang, Jun Zhou, Yankun Ren, Yue Zhang, Xinxing Yang, Meng Li, Qitao Shi, Longfei Li
In this paper, we consider the problem of long-tail scenario modeling with budget limitations, i.e., insufficient human resources for the model training stage and limited time and computing resources for the model inference stage.
1 code implementation • 17 May 2023 • Hanxu Hu, Hongyuan Lu, Huajian Zhang, Wai Lam, Yue Zhang
To this end, we propose a novel method called CoS (Chain-of-Symbol Prompting) that represents the complex environments with condensed symbolic spatial representations during the chained intermediate thinking steps.
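A toy illustration of the contrast between a verbose description and a condensed symbolic form in the intermediate steps; the symbol vocabulary here is an assumption, not the paper's notation:

```python
# Sketch: build a Chain-of-Symbol-style prompt whose intermediate steps use
# condensed symbols instead of full sentences. Symbols are assumed notation.
natural = ("The book is on the shelf. The shelf is left of the desk. "
           "The lamp is on the desk.")

symbolic_steps = [
    "book / shelf",   # "/" = on top of (assumed)
    "shelf < desk",   # "<" = left of (assumed)
    "lamp / desk",
]

prompt = (f"Environment: {natural}\n"
          "Reason step by step in symbols:\n" + "\n".join(symbolic_steps) +
          "\nQuestion: which object is left of the lamp's supporting object?")
print(prompt)
```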
1 code implementation • 15 May 2023 • Linyi Yang, Yingpeng Ma, Yue Zhang
Using FinTrust, we show that the consistency of state-of-the-art NLP models for financial forecasting is poor.
1 code implementation • 14 May 2023 • Yingjie Niu, Linyi Yang, Ruihai Dong, Yue Zhang
Our method has been theoretically and empirically shown to be effective in enhancing the generalization ability of both generative and discriminative models.
1 code implementation • 13 May 2023 • Yu Zhang, Siqi Chen, Mingdao Wang, Xianlin Zhang, Chuang Zhu, Yue Zhang, Xueming Li
Extensive experiments demonstrate that our method outperforms other methods in maintaining temporal consistency both qualitatively and quantitatively.
1 code implementation • 12 May 2023 • Hongliang He, Junlei Zhang, Zhenzhong Lan, Yue Zhang
Contrastive learning-based methods, such as unsup-SimCSE, have achieved state-of-the-art (SOTA) performance in learning unsupervised sentence embeddings.
no code implementations • 10 May 2023 • Yun Luo, Zhen Yang, Xuefeng Bai, Fandong Meng, Jie Zhou, Yue Zhang
Intuitively, the representation forgetting can influence the general knowledge stored in pre-trained language models (LMs), but the concrete effect is still unclear.
no code implementations • 8 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang
Sequence-to-sequence (seq2seq) models have been widely used for natural language processing, computer vision, and other deep learning tasks.
1 code implementation • 8 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang
Document-level machine translation faces the challenge of data sparsity due to its long input length and a small amount of training data, increasing the risk of learning spurious patterns.
no code implementations • 3 May 2023 • Zebin Ou, Yue Zhang
Robust loss functions are designed to combat the adverse impacts of label noise, whose robustness is typically supported by theoretical bounds agnostic to the training dynamics.
1 code implementation • 26 Apr 2023 • Haiqin Xie, Cheng Wang, Shicheng Li, Yue Zhang, Shanshan Wang
In the realm of urban transportation, metro systems serve as crucial and sustainable means of public transit.
no code implementations • 14 Apr 2023 • Jiahua Dong, Guohua Cheng, Yue Zhang, Chengtao Peng, Yu Song, Ruofeng Tong, Lanfen Lin, Yen-Wei Chen
Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis.
no code implementations • 13 Apr 2023 • Siqi Chen, Xueming Li, Xianlin Zhang, Mingdao Wang, Yu Zhang, Yue Zhang
Previous methods search for correspondences across the entire reference image, and this type of global matching is prone to mismatches.
no code implementations • 11 Apr 2023 • Yue Zhang, Chengtao Peng, Qiuli Wang, Dan Song, Kaiyan Li, S. Kevin Zhou
Besides, we propose a Dynamic Feature Unification Module to integrate information from a varying number of available modalities, which enables the network to be robust to random missing modalities.
no code implementations • 7 Apr 2023 • Guangsheng Bao, Zebin Ou, Yue Zhang
Human experts write summaries using different techniques, including rewriting a sentence in the document or fusing multiple sentences to generate a summary sentence.
1 code implementation • 7 Apr 2023 • Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, Yue Zhang
With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as "advanced" at reasoning tasks, we are eager to examine GPT-4's performance on various logical reasoning tasks.
no code implementations • 4 Apr 2023 • Tao Fang, Shu Yang, Kaixin Lan, Derek F. Wong, Jinpeng Hu, Lidia S. Chao, Yue Zhang
To showcase its capabilities in GEC, we design zero-shot chain-of-thought (CoT) and few-shot CoT settings using in-context learning for ChatGPT.
no code implementations • 27 Mar 2023 • Siqi Chen, Xueming Li, Xianlin Zhang, Mingdao Wang, Yu Zhang, Jiatong Han, Yue Zhang
Exemplar-based video colorization is an essential technique for applications like old movie restoration.
1 code implementation • 26 Mar 2023 • Yue Zhang, Suchen Wang, Shichao Kan, Zhenyu Weng, Yigang Cen, Yap-Peng Tan
Our key idea is to formulate the POAR problem as an image-text search problem.
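A rough sketch of the image-text search formulation, using off-the-shelf CLIP for illustration (the paper's encoders and training objective may differ; the attribute phrases are assumptions):

```python
# Sketch: pedestrian attribute recognition as image-text retrieval: rank
# attribute phrases by similarity to the image in a shared embedding space.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

attributes = ["a pedestrian wearing a hat",
              "a pedestrian carrying a backpack",
              "a pedestrian in a long coat"]
image = preprocess(Image.open("pedestrian.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(attributes).to(device)

with torch.no_grad():
    img_f = model.encode_image(image)
    txt_f = model.encode_text(text)
img_f = img_f / img_f.norm(dim=-1, keepdim=True)
txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
for attr, s in zip(attributes, (img_f @ txt_f.T).squeeze(0).tolist()):
    print(f"{attr}: {s:.3f}")  # higher = attribute more likely present
```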
no code implementations • 22 Mar 2023 • Fengji Zhang, Bei Chen, Yue Zhang, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, Weizhu Chen
It streamlines the repository-level code completion process by incorporating a similarity-based retriever and a pre-trained code language model, which allows for the effective utilization of repository-level information for code completion and grants the ability to generate code at various levels of granularity.
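A hedged sketch of the retrieve-then-generate idea: rank repository snippets by similarity to the unfinished code and prepend the best matches to the completion prompt (token-level Jaccard similarity is a simplifying assumption):

```python
# Sketch: similarity-based retrieval for repository-level code completion.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def build_prompt(unfinished: str, repo_snippets: list[str], k: int = 2) -> str:
    ranked = sorted(repo_snippets, key=lambda s: jaccard(s, unfinished),
                    reverse=True)
    context = "\n\n".join(f"# Retrieved repo context:\n{s}" for s in ranked[:k])
    return f"{context}\n\n{unfinished}"  # fed to a code LM for completion

snippets = ["def load_config(path):\n    ...",
            "def save_config(cfg, path):\n    ..."]
print(build_prompt("def load_cfg(", snippets))
```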
no code implementations • 15 Mar 2023 • Han Yang, Qiuli Wang, Yue Zhang, Zhulin An, Chen Liu, Xiaohong Zhang, S. Kevin Zhou
Radiologists possess diverse training and clinical experiences, leading to variations in the segmentation annotations of lung nodules and resulting in segmentation uncertainty. Conventional methods typically select a single annotation as the learning target or attempt to learn a latent space comprising multiple annotations.
1 code implementation • 22 Feb 2023 • Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, Xing Xie
In this paper, we conduct a thorough evaluation of the robustness of ChatGPT from the adversarial and out-of-distribution (OOD) perspective.
1 code implementation • 18 Feb 2023 • Yue Zhang, Parisa Kordjamshidi
The mentioned landmarks are not recognizable by the navigation agent due to the different vision abilities of the instructor and the modeled agent.
1 code implementation • 16 Feb 2023 • Hossein Rajaby Faghihi, Aliakbar Nafar, Chen Zheng, Roshanak Mirzaee, Yue Zhang, Andrzej Uszok, Alexander Wan, Tanawan Premsri, Dan Roth, Parisa Kordjamshidi
Recent research has shown that integrating domain knowledge into deep learning architectures is effective -- it helps reduce the amount of required data, improves the accuracy of the models' decisions, and improves the interpretability of models.
1 code implementation • 8 Feb 2023 • Yun Luo, Zihan Liu, Stan Z. Li, Yue Zhang
(Dis)agreement detection aims to identify the authors' attitudes or positions (agree, disagree, neutral) towards a specific text.
no code implementations • 3 Feb 2023 • Hongmin Cai, Fei Qi, Junyu Li, Yu Hu, Yue Zhang, Yiu-ming Cheung, Bin Hu
Conventional clustering methods based on pairwise affinity usually suffer from the concentration effect when processing high-dimensional features with small sample sizes, resulting in inaccurate encoding of sample proximity and suboptimal clustering performance.
no code implementations • 27 Jan 2023 • Yaoxian Song, Penglei Sun, Yi Ren, Yu Zheng, Yue Zhang
To evaluate the effectiveness, we perform part-level language-grounded grasping experiments at multiple difficulty levels and deploy our proposed model on a real robot.
no code implementations • 17 Dec 2022 • Chenyang Lyu, Linyi Yang, Yue Zhang, Yvette Graham, Jennifer Foster
User and product information associated with a review is useful for sentiment polarity prediction.
no code implementations • 8 Dec 2022 • Jianhao Yan, Jin Xu, Fandong Meng, Jie Zhou, Yue Zhang
In this work, we show that the issue arises from the inconsistency of label smoothing between the token-level and sequence-level distributions.
1 code implementation • The 33rd British Machine Vision Conference 2022 • Ji Huang, Chao Liang, Yue Zhang, Zhongyuan Wang, Chunjie Zhang
Existing RA work can be generally divided into unsupervised methods and fully-supervised methods.
1 code implementation • 17 Nov 2022 • Yulong Chen, Yang Liu, Ruochen Xu, ZiYi Yang, Chenguang Zhu, Michael Zeng, Yue Zhang
The high annotation costs and diverse demands of various summarization tasks motivate the development of few-shot summarization.
1 code implementation • 15 Nov 2022 • Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang
Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.
Natural Language Understanding
Out-of-Distribution Generalization
no code implementations • 15 Nov 2022 • Yue Zhang, Zhenghua Li
Recently, Zhang et al. (2022) propose a syntax-aware grammatical error correction (GEC) approach, named SynGEC, showing that incorporating tailored dependency-based syntax of the input sentence is quite beneficial to GEC.
1 code implementation • 31 Oct 2022 • Tengxiao Liu, Qipeng Guo, Xiangkun Hu, Yue Zhang, Xipeng Qiu, Zheng Zhang
RLET iteratively performs single-step reasoning with sentence selection and deduction generation modules, from which the training signal is accumulated across the tree with an elaborately designed reward function that is aligned with the evaluation.
1 code implementation • ACL 2022 • Jiangbin Zheng, Yile Wang, Ge Wang, Jun Xia, Yufei Huang, Guojiang Zhao, Yue Zhang, Stan Z. Li
Although contextualized embeddings generated from large-scale pre-trained models perform well in many tasks, traditional static embeddings (e.g., Skip-gram, Word2Vec) still play an important role in low-resource and lightweight settings due to their low computational cost, ease of deployment, and stability.
1 code implementation • 22 Oct 2022 • Xuefeng Bai, Seng Yang, Leyang Cui, Linfeng Song, Yue Zhang
Based on our observation, we investigate two approaches to reduce the domain distribution divergence of text and AMR features, respectively.
1 code implementation • 22 Oct 2022 • Yue Zhang, Bo Zhang, Zhenghua Li, Zuyi Bao, Chen Li, Min Zhang
Then, we obtain parse trees of the source incorrect sentences by projecting trees of the target correct sentences.
1 code implementation • 20 Oct 2022 • Yafu Li, Leyang Cui, Yongjing Yin, Yue Zhang
Despite low latency, non-autoregressive machine translation (NAT) suffers severe performance deterioration due to the naive independence assumption.
1 code implementation • 20 Oct 2022 • Marcos V. Conde, Radu Timofte, Yibin Huang, Jingyang Peng, Chang Chen, Cheng Li, Eduardo Pérez-Pellitero, Fenglong Song, Furui Bai, Shuai Liu, Chaoyu Feng, Xiaotao Wang, Lei Lei, Yu Zhu, Chenghua Li, Yingying Jiang, Yong A, Peisong Wang, Cong Leng, Jian Cheng, Xiaoyu Liu, Zhicun Yin, Zhilu Zhang, Junyi Li, Ming Liu, WangMeng Zuo, Jun Jiang, Jinha Kim, Yue Zhang, Beiji Zou, Zhikai Zong, Xiaoxiao Liu, Juan Marín Vega, Michael Sloth, Peter Schneider-Kamp, Richard Röttger, Furkan Kınlı, Barış Özcan, Furkan Kıraç, Li Leyi, SM Nadim Uddin, Dipon Kumar Ghosh, Yong Ju Jung
Cameras capture sensor RAW images and transform them into pleasant RGB images, suitable for the human eyes, using their integrated Image Signal Processor (ISP).
no code implementations • 19 Oct 2022 • Yue Zhang, Hongliang Fei, Dingcheng Li, Tan Yu, Ping Li
In particular, we focus on few-shot image recognition tasks on pretrained vision-language models (PVLMs) and develop a method of prompting through prototype (PTP), where we define $K$ image prototypes and $K$ prompt prototypes.
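A minimal sketch of prompting through prototypes, with the shapes and the softmax weighting as assumptions rather than the paper's exact formulation:

```python
# Sketch: weight K learnable prompt prototypes by the query image's
# similarity to K image prototypes, yielding an image-conditioned prompt.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypePrompting(nn.Module):
    def __init__(self, K=8, img_dim=512, prompt_len=16, prompt_dim=512):
        super().__init__()
        self.img_protos = nn.Parameter(torch.randn(K, img_dim))
        self.prompt_protos = nn.Parameter(torch.randn(K, prompt_len, prompt_dim))

    def forward(self, img_feat):  # img_feat: (B, img_dim)
        sims = F.cosine_similarity(img_feat.unsqueeze(1),
                                   self.img_protos.unsqueeze(0), dim=-1)  # (B, K)
        w = sims.softmax(dim=-1)  # per-image prototype weights
        return torch.einsum("bk,kld->bld", w, self.prompt_protos)  # (B, L, D)

prompts = PrototypePrompting()(torch.randn(4, 512))  # image-conditioned prompts
```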
no code implementations • 18 Oct 2022 • Yue Zhang, Hongliang Fei, Ping Li
Specifically, we build a noise model to estimate the unknown labeling noise distribution over input contexts and noisy type labels.
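A simplified sketch of one common way to realize such a noise model, a label-transition matrix applied to the clean-label posterior; making the matrix context-independent is a simplifying assumption (the paper conditions on input contexts):

```python
# Sketch: p(noisy type | x) = p(clean type | x) @ T, where T[c, j] is the
# probability that clean type c is observed as noisy type j.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyTypeHead(nn.Module):
    def __init__(self, hidden_dim=768, num_types=10):
        super().__init__()
        self.clf = nn.Linear(hidden_dim, num_types)
        # Initialized near the identity; rows softmax to a stochastic matrix.
        self.noise_logits = nn.Parameter(torch.eye(num_types) * 5.0)

    def forward(self, h):                         # h: (B, hidden_dim)
        clean = F.softmax(self.clf(h), dim=-1)    # p(clean type | context)
        T = F.softmax(self.noise_logits, dim=-1)  # p(noisy | clean)
        return clean, clean @ T                   # clean and noisy posteriors

clean_p, noisy_p = NoisyTypeHead()(torch.randn(4, 768))
```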
no code implementations • COLING 2022 • Yongjing Yin, Yafu Li, Fandong Meng, Jie Zhou, Yue Zhang
Modern neural machine translation (NMT) models have achieved competitive performance in standard benchmarks.
1 code implementation • COLING 2022 • Yue Zhang, Parisa Kordjamshidi
Understanding spatial and visual information is essential for a navigation agent who follows natural language instructions.
1 code implementation • COLING 2022 • Xuefeng Bai, Linfeng Song, Yue Zhang
However, these models are typically trained on surface dialogue text and are thus shown to be weak at understanding the main semantic meaning of a dialogue context.
no code implementations • 15 Sep 2022 • Ziqi Zhang, Yile Wang, Yue Zhang, Donglin Wang
Experimental results show that our RL pre-trained models achieve performance close to models trained with the LM objective, indicating that common useful features exist across the two modalities.
1 code implementation • 8 Sep 2022 • Yile Wang, Linyi Yang, Zhiyang Teng, Ming Zhou, Yue Zhang
Transformer-based pre-trained models have advanced substantially in recent years, becoming one of the most important backbones in natural language processing.
1 code implementation • COLING 2022 • Linyi Yang, Lifan Yuan, Leyang Cui, Wenyang Gao, Yue Zhang
Few-shot Named Entity Recognition (NER) is imperative for entity tagging in limited-resource domains and has thus received growing attention in recent years.
1 code implementation • COLING 2022 • Naihao Deng, Yulong Chen, Yue Zhang
Text-to-SQL has attracted attention from both the natural language processing and database communities because of its ability to convert the semantics in natural language into SQL queries and its practical application in building natural language interfaces to database systems.
no code implementations • 20 Aug 2022 • Yile Wang, Yue Zhang
We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
1 code implementation • COLING 2022 • Yun Luo, Fang Guo, Zihan Liu, Yue Zhang
Cross-domain sentiment analysis aims to predict the sentiment of texts in the target domain using the model trained on the source domain to cope with the scarcity of labeled data.
no code implementations • 18 Aug 2022 • Pai Liu, Wenyang Gao, Wenjie Dong, Songfang Huang, Yue Zhang
Open information extraction is an important NLP task that targets extracting structured information from unstructured text without limitations on the relation type or the domain of the text.
1 code implementation • COLING 2022 • Yun Luo, Zihan Liu, Yuefeng Shi, Stan Z. Li, Yue Zhang
Meanwhile, ablation studies prove the significance of each module in our model.
1 code implementation • COLING 2022 • Yidong Wang, Hao Wu, Ao Liu, Wenxin Hou, Zhen Wu, Jindong Wang, Takahiro Shinozaki, Manabu Okumura, Yue Zhang
Limited labeled data increases the risk of distribution shift between the test data and the training data.
2 code implementations • 12 Aug 2022 • Yidong Wang, Hao Chen, Yue Fan, Wang Sun, Ran Tao, Wenxin Hou, RenJie Wang, Linyi Yang, Zhi Zhou, Lan-Zhe Guo, Heli Qi, Zhen Wu, Yu-Feng Li, Satoshi Nakamura, Wei Ye, Marios Savvides, Bhiksha Raj, Takahiro Shinozaki, Bernt Schiele, Jindong Wang, Xing Xie, Yue Zhang
We further provide the pre-trained versions of the state-of-the-art neural models for CV tasks to make the cost affordable for further tuning.
no code implementations • 10 Aug 2022 • Yingzi Fan, Longfei Han, Yue Zhang, Lechao Cheng, Chen Xia, Di Hu
The domain discrepancy leads to performance degradation of CNN models on target test data.
1 code implementation • 8 Aug 2022 • Yulong Chen, Naihao Deng, Yang Liu, Yue Zhang
We report the results of DialogSum Challenge, the shared task on summarizing real-life scenario dialogues at INLG 2022.
no code implementations • 26 Jul 2022 • Yue Zhang, Yajie Zou, Yuanchang Xie, Lei Chen
A quantitative understanding of dynamic lane-changing (LC) interaction patterns is indispensable for improving the decision-making of autonomous vehicles, especially in mixed traffic with human-driven vehicles.
1 code implementation • 13 Jul 2022 • Guangsheng Bao, Yue Zhang
The rewriting method for text summarization combines extractive and abstractive approaches, improving the conciseness and readability of extractive summaries using an abstractive model.
2 code implementations • 23 Jun 2022 • Yue Zhang, Haochen Jiang, Zuyi Bao, Bo Zhang, Chen Li, Zhenghua Li
We have accumulated 1,119 error templates for Chinese GEC based on this method.
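As a toy illustration of how such an error template can synthesize training data (the template below is a hypothetical pattern, not one of the paper's 1,119 templates):

```python
# Sketch: apply a hypothetical error template to a correct sentence to
# synthesize a grammatical error for Chinese GEC data augmentation.
import re

# Assumed template: adverbial "地" before a verb wrongly replaced with "的".
pattern = re.compile("地(?=工作|学习|生活)")

def corrupt(sentence: str) -> str:
    return pattern.sub("的", sentence)

print(corrupt("他努力地工作。"))  # -> "他努力的工作。" (injected error)
```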
no code implementations • Findings (ACL) 2022 • Li Du, Xiao Ding, Yue Zhang, Kai Xiong, Ting Liu, Bing Qin
To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections in the training process.
2 code implementations • 1 May 2022 • Yulong Chen, Ming Zhong, Xuefeng Bai, Naihao Deng, Jing Li, Xianchao Zhu, Yue Zhang
We propose the shared task of cross-lingual conversation summarization, \emph{ConvSumX Challenge}, opening new avenues for researchers to investigate solutions that integrate conversation summarization and machine translation.
Abstractive Dialogue Summarization
Cross-Lingual Abstractive Summarization
1 code implementation • 23 Apr 2022 • Yizhe Xu, Tom H. Greene, Adam P. Bress, Brandon K. Bellows, Yue Zhang, Zugui Zhang, Paul Kolm, William S. Weintraub, Andrew S. Moran, Jincheng Shen
Evidence from observational studies has become increasingly important for supporting healthcare policy making via cost-effectiveness (CE) analyses.
1 code implementation • NAACL 2022 • Yue Zhang, Zhenghua Li, Zuyi Bao, Jiacheng Li, Bo Zhang, Chen Li, Fei Huang, Min Zhang
This paper presents MuCGEC, a multi-reference multi-source evaluation dataset for Chinese Grammatical Error Correction (CGEC), consisting of 7,063 sentences collected from three Chinese-as-a-Second-Language (CSL) learner sources.
no code implementations • 17 Apr 2022 • Cunxiang Wang, Fuli Luo, Yanyang Li, Runxin Xu, Fei Huang, Yue Zhang
Pre-trained language models (PLMs) like BERT have made significant progress in various downstream NLP tasks.
1 code implementation • COLING 2022 • Zebin Ou, Meishan Zhang, Yue Zhang
Word ordering is a constrained language generation task taking unordered words as input.
1 code implementation • 15 Apr 2022 • Linyi Yang, Zhen Wang, Yuxiang Wu, Jie Yang, Yue Zhang
Understanding causality is key to the success of NLP applications, especially in high-stakes domains.
no code implementations • 14 Apr 2022 • Yun Luo, Hongjie Cai, Linyi Yang, Yanxia Qin, Rui Xia, Yue Zhang
Since previous studies on open-domain targeted sentiment analysis are limited in dataset domain variety and restricted to the sentence level, we propose a novel dataset consisting of 6,013 human-labeled examples to extend the data domains in topics of interest and to the document level.
1 code implementation • ACL 2022 • Jinghui Lu, Linyi Yang, Brian Mac Namee, Yue Zhang
We present a novel rationale-centric framework with human-in-the-loop -- Rationales-centric Double-robustness Learning (RDL) -- to boost model out-of-distribution performance in few-shot learning scenarios.
1 code implementation • 16 Mar 2022 • ZiFan Chen, Jie Zhao, Hao Yu, Yue Zhang, Li Zhang
Accurate and efficient lumbar spine disease identification is crucial for clinical diagnosis.
2 code implementations • ACL 2022 • Xuefeng Bai, Yulong Chen, Yue Zhang
To our knowledge, we are the first to consider pre-training on semantic graphs.
Ranked #1 on AMR-to-Text Generation on Bio (BLEU metric, using extra training data)
no code implementations • 7 Mar 2022 • Leyang Cui, Fandong Meng, Yijin Liu, Jie Zhou, Yue Zhang
Although pre-trained sequence-to-sequence models have achieved great success in dialogue response generation, chatbots still suffer from generating inconsistent responses in real-world practice, especially in multi-turn settings.
no code implementations • 2 Mar 2022 • Sen Yang, Yunchen Zhang, Leyang Cui, Yue Zhang
Thanks to the advanced improvement of large pre-trained language models, prompt-based fine-tuning is shown to be effective on a variety of downstream tasks.
no code implementations • 10 Feb 2022 • Yulong Chen, Yang Liu, Li Dong, Shuohang Wang, Chenguang Zhu, Michael Zeng, Yue Zhang
However, for prompt learning, there are still two salient gaps between NLP tasks and pretraining.
no code implementations • 9 Feb 2022 • Jian Zhao, Yue Zhang, Xunhan Hu, Weixun Wang, Wengang Zhou, Jianye Hao, Jiangcheng Zhu, Houqiang Li
In cooperative multi-agent systems, agents jointly take actions and receive a team reward instead of individual rewards.
no code implementations • 5 Jan 2022 • Linyi Yang, Jiazheng Li, Ruihai Dong, Yue Zhang, Barry Smyth
Financial forecasting has been an important and active area of machine learning research because of the challenges it presents and the potential rewards that even minor improvements in prediction accuracy or forecasting may entail.
2 code implementations • 6 Dec 2021 • Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.
no code implementations • IEEE 2021 • Hao Fei, Yafeng Ren, Yue Zhang, Donghong Ji
Aspect-based sentiment triplet extraction (ASTE) aims at recognizing the joint triplets from texts, i.e., aspect terms, opinion expressions, and correlated sentiment polarities.
Ranked #3 on Aspect Sentiment Triplet Extraction on ASTE-Data-V2
no code implementations • 30 Oct 2021 • Yanrui Niu, Jingyao Yang, Ankang Lu, Baojin Huang, Yue Zhang, Ji Huang, Shishi Wen, Dongshu Xu, Chao Liang, Zhongyuan Wang, Jun Chen
In this paper, we briefly introduce the experimental methods and results of the WHU-NERCMS team at TRECVID 2021.
1 code implementation • 23 Oct 2021 • Yue Zhang, Chao Liang, Longxiang Jiang
To address this issue, we propose a confidence-aware active feedback method (CAAF) that is specifically designed for online RF in interactive INS tasks.
no code implementations • EMNLP 2021 • Yue Zhang, Bo Zhang, Rui Wang, Junjie Cao, Chen Li, Zuyi Bao
Previous works on key information extraction from visually rich documents (VRDs) mainly focus on labeling the text within each bounding box (i.e., semantic entity), while the relations in-between are largely unexplored.
Ranked #2 on Entity Linking on FUNSD
1 code implementation • 15 Oct 2021 • Mohsen Yavartanoo, Shih-Hsuan Hung, Reyhaneh Neshatavar, Yue Zhang, Kyoung Mu Lee
3D shape representation and its processing have substantial effects on 3D shape recognition.
Ranked #1 on 3D Object Classification on ModelNet10
1 code implementation • EMNLP 2021 • Jian Liu, Zhiyang Teng, Leyang Cui, Hanmeng Liu, Yue Zhang
Aspect category sentiment analysis has attracted increasing research attention.
no code implementations • 29 Sep 2021 • Xinbo Zhang, Changzhi Sun, Yue Zhang, Lei LI, Hao Zhou
Logical reasoning over natural text is an important capability towards human level intelligence.
1 code implementation • ACL 2022 • Leyang Cui, Sen Yang, Yue Zhang
Moreover, our method achieves state-of-the-art BERT-based performance on PTB (95.92 F1) and strong performance on CTB (92.31 F1).
Ranked #6 on Constituency Parsing on Penn Treebank
no code implementations • 14 Sep 2021 • Pinyuan Zhong, Yue Zhang, Xiaoying Tang
The hippocampal surface was then generated from the mean shape and the shape variation parameters.
1 code implementation • EMNLP 2021 • Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang
To deal with this problem, instead of introducing knowledge base as the input, we force the model to learn a better semantic representation by predicting the information in the knowledge base, only based on the input context.
1 code implementation • EMNLP 2021 • Leonardo F. R. Ribeiro, Jonas Pfeiffer, Yue Zhang, Iryna Gurevych
Recent work on multilingual AMR-to-text generation has exclusively focused on data augmentation strategies that utilize silver AMR.
no code implementations • 15 Aug 2021 • Cunxiang Wang, Boyuan Zheng, Yuchen Niu, Yue Zhang
To quantitatively and intuitively explore the generalization ability of pre-trained language models (PLMs), we have designed several tasks of arithmetic and logical reasoning.
no code implementations • 2 Aug 2021 • Yue Zhang, Chengtao Peng, Liying Peng, Huimin Huang, Ruofeng Tong, Lanfen Lin, Jingsong Li, Yen-Wei Chen, Qingqing Chen, Hongjie Hu, Zhiyi Peng
In this work, we propose a novel LiTS method to adequately aggregate multi-phase information and refine uncertain region segmentation.
1 code implementation • ACL 2021 • Qiankun Fu, Linfeng Song, Wenyu Du, Yue Zhang
Although parsing to Abstract Meaning Representation (AMR) has become very popular and AMR has been shown effective on many sentence-level downstream tasks, little work has studied how to generate AMRs that can represent multi-sentence information.
no code implementations • 31 Jul 2021 • Yue Zhang, Yajie Zou, Lingtao Wu, Wanbing Han
This study develops a primitive-based framework to identify the driving patterns during merging processes and reveal the evolutionary mechanism at freeway on-ramps in congested traffic flow.
1 code implementation • 3 Jul 2021 • Yue Jin, Yue Zhang, Tao Qin, Xudong Zhang, Jian Yuan, Houqiang Li, Tie-Yan Liu
Inspired by the two observations, in this work, we study a new problem, supervised off-policy ranking (SOPR), which aims to rank a set of target policies based on supervised learning by leveraging off-policy data and policies with known performance.
1 code implementation • ACL 2021 • Linyi Yang, Jiazheng Li, Pádraig Cunningham, Yue Zhang, Barry Smyth, Ruihai Dong
While state-of-the-art NLP models have achieved excellent performance on a wide range of tasks in recent years, important questions are being raised about their robustness and their underlying sensitivity to systematic biases that may exist in their training and test data.
no code implementations • 29 Jun 2021 • Hongxiu Zhao, Xun Zhang, Faouzi Bader, Yue Zhang
Many visible light positioning (VLP) localization algorithms do not consider the shapes of the transmitters, which makes them impractical and lowers localization accuracy.
1 code implementation • Findings (ACL) 2021 • Leyang Cui, Yu Wu, Jian Liu, Sen Yang, Yue Zhang
To address the issue, we propose a template-based method for NER, treating NER as a language model ranking problem in a sequence-to-sequence framework, where original sentences and statement templates filled by candidate named entity span are regarded as the source sequence and the target sequence, respectively.
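A condensed sketch of the ranking step: fill a statement template with a candidate span and score the filled statement with a seq2seq LM (the templates and scoring function here are illustrative assumptions):

```python
# Sketch: template-based NER as seq2seq ranking with BART.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large").eval()

def score(source: str, target: str) -> float:
    """Negative seq2seq loss of generating `target` from `source`."""
    src = tokenizer(source, return_tensors="pt")
    tgt = tokenizer(target, return_tensors="pt").input_ids
    with torch.no_grad():
        return -model(**src, labels=tgt).loss.item()

sentence = "Barack Obama visited Berlin."
span = "Barack Obama"
templates = [f"{span} is a person entity.",
             f"{span} is a location entity.",
             f"{span} is not a named entity."]
print(max(templates, key=lambda t: score(sentence, t)))
```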
1 code implementation • ACL 2021 • Cunxiang Wang, Pai Liu, Yue Zhang
Recent work has investigated the interesting question of using pre-trained language models (PLMs) as knowledge bases for answering open questions.
1 code implementation • NAACL 2021 • Qingrong Xia, Bo Zhang, Rui Wang, Zhenghua Li, Yue Zhang, Fei Huang, Luo Si, Min Zhang
Fine-grained opinion mining (OM) has attracted increasing attention in the natural language processing (NLP) community; it aims to find the opinion structures of "Who expressed what opinions towards what" in one sentence.
1 code implementation • ACL 2021 • Guangsheng Bao, Yue Zhang, Zhiyang Teng, Boxing Chen, Weihua Luo
However, studies show that when we further enlarge the translation unit to a whole document, supervised training of the Transformer can fail.
1 code implementation • ACL 2021 • Yafu Li, Yongjing Yin, Yulong Chen, Yue Zhang
Modern neural machine translation (NMT) models have achieved competitive performance in standard benchmarks such as WMT.
no code implementations • 22 May 2021 • Yue Zhang, Yajie Zou, Lingtao Wu
This study explores the spatiotemporal evolution law and risk formation mechanism of LC interactive patterns; the findings are useful for comprehensively understanding the latent interactive patterns and for improving the rationality and safety of autonomous vehicles' decision-making.
1 code implementation • ACL 2021 • Xuefeng Bai, Yulong Chen, Linfeng Song, Yue Zhang
Although neural models have achieved competitive results in dialogue systems, they have shown limited ability in representing core semantics, such as ignoring important entities.
Ranked #8 on Dialog Relation Extraction on DialogRE
1 code implementation • ACL 2021 • Wei Liu, Xiyan Fu, Yue Zhang, Wenming Xiao
Lexicon information and pre-trained models, such as BERT, have been combined to explore Chinese sequence labelling tasks due to their respective strengths.
no code implementations • ACL (splurobonlp) 2021 • Yue Zhang, Quan Guo, Parisa Kordjamshidi
Additionally, the experimental results demonstrate that explicit modeling of spatial semantic elements in the instructions can improve the grounding and spatial reasoning of the model.
1 code implementation • Findings (ACL) 2021 • Yulong Chen, Yang Liu, Liang Chen, Yue Zhang
Proposal of large-scale datasets has facilitated research on deep neural models for news summarization.
Abstractive Dialogue Summarization
Common Sense Reasoning
1 code implementation • ACL 2021 • Tao Gui, Xiao Wang, Qi Zhang, Qin Liu, Yicheng Zou, Xin Zhou, Rui Zheng, Chong Zhang, Qinzhuo Wu, Jiacheng Ye, Zexiong Pang, Yongxin Zhang, Zhengyan Li, Ruotian Ma, Zichu Fei, Ruijian Cai, Jun Zhao, Xingwu Hu, Zhiheng Yan, Yiding Tan, Yuan Hu, Qiyuan Bian, Zhihua Liu, Bolin Zhu, Shan Qin, Xiaoyu Xing, Jinlan Fu, Yue Zhang, Minlong Peng, Xiaoqing Zheng, Yaqian Zhou, Zhongyu Wei, Xipeng Qiu, Xuanjing Huang
To guarantee user acceptability, all the text transformations are linguistically based, and we provide a human evaluation for each one.
1 code implementation • EMNLP 2021 • Leonardo F. R. Ribeiro, Yue Zhang, Iryna Gurevych
Pretrained language models (PLM) have recently advanced graph-to-text generation, where the input graph is linearized into a sequence and fed into the PLM to obtain its representation.
Ranked #1 on Data-to-Text Generation on AMR3.0
no code implementations • 12 Mar 2021 • Yixian Liu, Liwen Zhang, Wenjuan Han, Yue Zhang, Kewei Tu
We focus on CommonGen, the task of generating text based on a set of concepts, as a representative task of constrained text generation.