no code implementations • 18 Nov 2013 • Min Zhang, Lei Yang, Zheng-Hai Huang
Additionally, by combining an effective heuristic for determining the $n$-rank, the proposed algorithm can also be applied to solve MnRA when the $n$-rank is unknown in advance.
no code implementations • 11 Feb 2015 • Yongfeng Zhang, Min Zhang, Yiqun Liu, Shaoping Ma
In this paper, we focus on the problem of phrase-level sentiment polarity labelling and attempt to bridge the gap between phrase-level and review-level sentiment analysis.
1 code implementation • EMNLP 2016 • Biao Zhang, Deyi Xiong, Jinsong Su, Qun Liu, Rongrong Ji, Hong Duan, Min Zhang
In order to perform efficient inference and learning, we introduce neural discourse relation models to approximate the prior and posterior distributions of the latent variable, and employ these approximated distributions to optimize a reparameterized variational lower bound.
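A minimal sketch of such a reparameterized variational lower bound, assuming diagonal Gaussian prior and posterior and a toy classifier likelihood; the network sizes are illustrative and this is not the authors' discourse relation model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReparamELBO(nn.Module):
    """Toy latent-variable model trained with a reparameterized variational lower bound."""
    def __init__(self, x_dim=64, z_dim=16, n_classes=4):
        super().__init__()
        self.post = nn.Linear(x_dim, 2 * z_dim)   # approximate posterior q(z|x)
        self.prior = nn.Linear(x_dim, 2 * z_dim)  # learned prior p(z|x) (assumption)
        self.clf = nn.Linear(z_dim, n_classes)    # likelihood p(y|z)

    def forward(self, x, y):
        mu_q, logvar_q = self.post(x).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior(x).chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        rec = -F.cross_entropy(self.clf(z), y)               # E_q[log p(y|z)]
        # KL( q(z|x) || p(z|x) ) between two diagonal Gaussians
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1).sum(-1).mean()
        return -(rec - kl)  # minimize the negative lower bound

model = ReparamELBO()
x, y = torch.randn(8, 64), torch.randint(0, 4, (8,))
loss = model(x, y)
loss.backward()
```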
1 code implementation • EMNLP 2016 • Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, Min Zhang
Models of neural machine translation are often from a discriminative family of encoder-decoders that learn a conditional distribution of a target sentence given a source sentence.
no code implementations • 4 Aug 2016 • Qingrong Xia, Zhenghua Li, Jiayuan Chao, Min Zhang
This paper describes our system designed for the NLPCC 2016 shared task on word segmentation on micro-blog texts.
no code implementations • 29 Sep 2016 • Zhenghua Li, Yue Zhang, Jiayuan Chao, Min Zhang
The first approach is previously proposed to directly train a log-linear graph-based parser (LLGPar) with PA based on a forest-based objective.
no code implementations • 17 Oct 2016 • Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, Min Zhang
Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years.
no code implementations • COLING 2016 • Fangyuan Li, Ruihong Huang, Deyi Xiong, Min Zhang
Aiming to resolve high complexities of event descriptions, previous work (Huang and Riloff, 2013) proposes multi-faceted event recognition and a bootstrapping method to automatically acquire both event facet phrases and event expressions from unannotated texts.
no code implementations • COLING 2016 • Wenliang Chen, Zhenjie Zhang, Zhenghua Li, Min Zhang
In this paper, we propose an approach to learn distributed representations of users and items from text comments for recommendation systems.
no code implementations • COLING 2016 • Haiqing Tang, Deyi Xiong, Min Zhang, ZhengXian Gong
In this paper, we study semantic dependencies between verbs and their arguments by modeling selectional preferences in the context of machine translation.
no code implementations • COLING 2016 • Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, Min Zhang
Parallel sentence representations are important for bilingual and cross-lingual tasks in natural language processing.
no code implementations • ACL 2017 • Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, Guodong Zhou
Even though a linguistics-free sequence-to-sequence model in neural machine translation (NMT) has a certain capability of implicitly learning syntactic information of source sentences, this paper shows that source syntax can be explicitly incorporated into NMT effectively to provide further improvements.
no code implementations • EMNLP 2017 • Xing Wang, Zhaopeng Tu, Deyi Xiong, Min Zhang
Otherwise, the NMT decoder generates a word from the vocabulary as the general NMT decoder does.
no code implementations • EMNLP 2017 • Chen Gong, Zhenghua Li, Min Zhang, Xinzhou Jiang
Traditionally, word segmentation (WS) adopts the single-grained formalism, where a sentence corresponds to a single word sequence.
no code implementations • IJCNLP 2017 • Yue Zhang, Zhenghua Li, Jun Lang, Qingrong Xia, Min Zhang
This paper describes and compares two straightforward approaches for dependency parsing with partial annotations (PA).
no code implementations • 11 Jan 2018 • Kai Song, Yue Zhang, Min Zhang, Weihua Luo
Neural machine translation (NMT) suffers a performance deficiency when a limited vocabulary fails to cover the source or target side adequately, which happens frequently when dealing with morphologically rich languages.
no code implementations • 11 Jan 2018 • Zhengqiu He, Wenliang Chen, Zhenghua Li, Meishan Zhang, Wei Zhang, Min Zhang
First, we encode the context of entities on a dependency tree as sentence-level entity embedding based on tree-GRU.
no code implementations • 16 Jan 2018 • YaoSheng Yang, Meishan Zhang, Wenliang Chen, Wei Zhang, Haofen Wang, Min Zhang
To quickly obtain new labeled data, we can turn to crowdsourcing as an alternative that is cheaper and faster.
no code implementations • International Joint Conferences on Artificial Intelligence Organization 2018 • Jingjing Wang, Jie Li, Shoushan Li, Yangyang Kang, Min Zhang, Luo Si, Guodong Zhou
Aspect sentiment classification, a challenging task in sentiment analysis, has been attracting more and more attention in recent years.
no code implementations • 30 Jun 2018 • Min Zhang, Qianli Ma, Chengfeng Wen, Hai Chen, Deruo Liu, Xianfeng GU, Jie He, Xiaoyin Xu
The Wasserstein distance between the nodules is calculated based on our new spherical optimal mass transport. This new algorithm works directly on the sphere using the spherical metric, which is much more accurate and efficient than previous methods.
no code implementations • WS 2018 • Nancy Chen, Rafael E. Banchs, Min Zhang, Xiangyu Duan, Haizhou Li
This report presents the results from the Named Entity Transliteration Shared Task conducted as part of The Seventh Named Entities Workshop (NEWS 2018) held at ACL 2018 in Melbourne, Australia.
no code implementations • WS 2018 • Nancy Chen, Xiangyu Duan, Min Zhang, Rafael E. Banchs, Haizhou Li
Transliteration is defined as phonetic translation of names across languages.
no code implementations • ACL 2018 • Xinzhou Jiang, Zhenghua Li, Bo Zhang, Min Zhang, Sheng Li, Luo Si
Treebank conversion is a straightforward and effective way to exploit various heterogeneous treebanks for boosting parsing performance.
no code implementations • 16 Jul 2018 • Dun Liang, Yuanchen Guo, Shaokui Zhang, Song-Hai Zhang, Peter Hall, Min Zhang, Shi-Min Hu
Combining LineNet and TTLane, we propose a pipeline to model HD maps with crowdsourced data for the first time.
1 code implementation • COLING 2018 • Yaosheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, Min Zhang
A bottleneck problem with Chinese named entity recognition (NER) in new domains is the lack of annotated data.
no code implementations • COLING 2018 • Lu Wang, Shoushan Li, Changlong Sun, Luo Si, Xiaozhong Liu, Min Zhang, Guodong Zhou
Question-Answer (QA) matching is a fundamental task in the Natural Language Processing community.
1 code implementation • COLING 2018 • Yachao Li, Junhui Li, Min Zhang
In the popular sequence-to-sequence (seq2seq) neural machine translation (NMT), there exist many weighted sum models (WSMs), each of which takes a set of inputs and generates one output.
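In that reading, a weighted sum model reduces to attention-style pooling: score a set of inputs, normalize the scores, and combine the inputs into one output. A minimal sketch with hypothetical dimensions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def weighted_sum(inputs, query, w):
    """inputs: (n, d) set of vectors; query: (d,); w: (d, d) scoring matrix.
    Returns one output vector as a normalized weighted sum of the inputs."""
    scores = inputs @ (w @ query)          # one unnormalized score per input
    weights = F.softmax(scores, dim=0)     # normalize to a distribution
    return weights @ inputs                # (d,) single output

h = torch.randn(5, 32)                     # e.g. encoder states
q = torch.randn(32)                        # e.g. current decoder state
out = weighted_sum(h, q, torch.randn(32, 32))
```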
1 code implementation • EMNLP 2018 • Yang Xu, Yu Hong, Huibin Ruan, Jianmin Yao, Min Zhang, Guodong Zhou
We tackle discourse-level relation recognition, a problem of determining semantic relations between text spans.
no code implementations • EMNLP 2018 • Chenlin Shen, Changlong Sun, Jingjing Wang, Yangyang Kang, Shoushan Li, Xiaozhong Liu, Luo Si, Min Zhang, Guodong Zhou
On the basis, we propose a three-stage hierarchical matching network to explore deep sentiment information in a QA text pair.
2 code implementations • 5 Oct 2018 • C. -C. Jay Kuo, Min Zhang, Siyang Li, Jiali Duan, Yueru Chen
To construct convolutional layers, we develop a new signal transform, called the Saab (Subspace Approximation with Adjusted Bias) transform.
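As a rough illustration, the sketch below builds a Saab-like transform on flattened patches: a constant DC kernel, AC kernels obtained by PCA of the DC-removed patches, and a single bias that shifts responses to be non-negative. This is a simplified reading of the published procedure, not the reference implementation:

```python
import numpy as np

def saab_fit(patches, num_kernels):
    """patches: (n, d) flattened local patches. Returns (kernels, bias)."""
    d = patches.shape[1]
    dc = np.ones(d) / np.sqrt(d)                    # DC (mean) kernel
    dc_resp = patches @ dc
    residual = patches - np.outer(dc_resp, dc)      # remove the DC component
    residual -= residual.mean(axis=0)               # center before PCA
    # AC kernels = top principal components of the residual
    _, _, vt = np.linalg.svd(residual, full_matrices=False)
    kernels = np.vstack([dc, vt[:num_kernels - 1]])
    # Bias chosen large enough to make all responses non-negative
    resp = patches @ kernels.T
    bias = max(0.0, -resp.min())
    return kernels, bias

def saab_transform(patches, kernels, bias):
    return patches @ kernels.T + bias

patches = np.random.rand(1000, 25)                  # e.g. 5x5 grayscale patches
kernels, bias = saab_fit(patches, num_kernels=6)
features = saab_transform(patches, kernels, bias)
```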
3 code implementations • EMNLP 2018 • Jiacheng Zhang, Huanbo Luan, Maosong Sun, FeiFei Zhai, Jingfang Xu, Min Zhang, Yang Liu
Although the Transformer translation model (Vaswani et al., 2017) has achieved state-of-the-art performance in a variety of translation tasks, how to use document-level context to deal with discourse phenomena problematic for Transformer still remains a challenge.
no code implementations • 6 Feb 2019 • Yueru Chen, Yijing Yang, Min Zhang, C. -C. Jay Kuo
A semi-supervised learning framework using the feedforward-designed convolutional neural networks (FF-CNNs) is proposed for image classification in this work.
no code implementations • 20 Feb 2019 • Lu Tan, Ling Li, Wanquan Liu, Jie Sun, Min Zhang
Euler's Elastica-based unsupervised segmentation models have a strong capability of completing the missing boundaries of objects in a clean image, but they do not work well for noisy images.
no code implementations • IEEE 2019 • Yun Ju, Guangyu Sun, Quanhe Chen, Min Zhang, Huixian Zhu, Mujeeb Ur Rehman
In this paper, a new forecasting model based on a convolution neural network and LightGBM is constructed.
1 code implementation • 9 Mar 2019 • Weizhi Ma, Min Zhang, Yue Cao, Woojeong Jin, Chenyang Wang, Yiqun Liu, Shaoping Ma, Xiang Ren
The framework encourages two modules to complement each other in generating effective and explainable recommendation: 1) inductive rules, mined from item-centric knowledge graphs, summarize common multi-hop relational patterns for inferring different item associations and provide human-readable explanation for model prediction; 2) recommendation module can be augmented by induced rules and thus have better generalization ability dealing with the cold-start issue.
no code implementations • 11 Mar 2019 • Wei Jiang, Zhenghua Li, Yu Zhang, Min Zhang
The key idea is to convert a UCCA semantic graph into a constituent tree, in which extra labels are deliberately designed to mark remote edges and discontinuous nodes for future recovery.
1 code implementation • NAACL 2019 • Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, Min Zhang
Leveraging user-provided translation to constrain NMT has practical significance.
no code implementations • NAACL 2019 • Meishan Zhang, Zhenghua Li, Guohong Fu, Min Zhang
Syntax has been demonstrated highly effective in neural machine translation (NMT).
Ranked #8 on Machine Translation on IWSLT2015 English-Vietnamese
no code implementations • 27 May 2019 • Zhao Zhang, Weiming Jiang, Jie Qin, Li Zhang, Fanzhang Li, Min Zhang, Shuicheng Yan
Then we compute a linear classifier based on the approximated sparse codes by an analysis mechanism to simultaneously consider the classification and representation powers.
no code implementations • SEMEVAL 2019 • Wei Jiang, Zhenghua Li, Yu Zhang, Min Zhang
The key idea is to convert a UCCA semantic graph into a constituent tree, in which extra labels are deliberately designed to mark remote edges and discontinuous nodes for future recovery.
Ranked #1 on UCCA Parsing on SemEval 2019 Task 1
1 code implementation • ACL 2019 • Xiangyu Duan, Mingming Yin, Min Zhang, Boxing Chen, Weihua Luo
But there is no cross-lingual parallel corpus, whose source sentence language is different from the summary language, to directly train a cross-lingual ASSUM system.
no code implementations • ACL 2019 • Jingjing Wang, Changlong Sun, Shoushan Li, Xiaozhong Liu, Luo Si, Min Zhang, Guodong Zhou
This paper extends the research to interactive reviews and proposes a new research task, namely Aspect Sentiment Classification towards Question-Answering (ASC-QA), for real-world applications.
no code implementations • ACL 2019 • Mingming Yang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Min Zhang, Tiejun Zhao
The training objective of neural machine translation (NMT) is to minimize the loss between the words in the translated sentences and those in the references.
1 code implementation • ACL 2019 • Zhenghua Li, Xue Peng, Min Zhang, Rui Wang, Luo Si
During the past decades, due to the lack of sufficient labeled data, most studies on cross-domain parsing focus on unsupervised domain adaptation, assuming there is no target-domain training data.
1 code implementation • 22 Jul 2019 • Qingrong Xia, Zhenghua Li, Min Zhang, Meishan Zhang, Guohong Fu, Rui Wang, Luo Si
Semantic role labeling (SRL), also known as shallow semantic parsing, is an important yet challenging task in NLP.
1 code implementation • 30 Jul 2019 • Haitao Wang, Zhengqiu He, Jin Ma, Wenliang Chen, Min Zhang
Our data constitutes the first dataset for inter-personal relationship extraction.
3 code implementations • 30 Jul 2019 • Min Zhang, Haoxuan You, Pranav Kadam, Shan Liu, C. -C. Jay Kuo
In the attribute building stage, we address the problem of unordered point cloud data using a space partitioning procedure and developing a robust descriptor that characterizes the relationship between a point and its one-hop neighbor in a PointHop unit.
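One way to picture the space-partitioning descriptor is to group each point's nearest neighbors into the eight octants around it and pool attributes per octant; the numpy sketch below follows that simplified reading rather than the exact PointHop unit:

```python
import numpy as np

def octant_descriptor(points, attrs, k=16):
    """points: (n, 3) xyz; attrs: (n, c) per-point attributes.
    Returns an (n, 8*c) descriptor built from k nearest neighbors."""
    n, c = attrs.shape
    desc = np.zeros((n, 8 * c))
    for i in range(n):
        offsets = points - points[i]                       # (n, 3)
        nn = np.argsort((offsets ** 2).sum(1))[1:k + 1]    # skip the point itself
        # Octant index from the signs of (dx, dy, dz)
        signs = (offsets[nn] > 0).astype(int)
        octant = signs[:, 0] * 4 + signs[:, 1] * 2 + signs[:, 2]
        for o in range(8):
            members = nn[octant == o]
            if len(members) > 0:
                desc[i, o * c:(o + 1) * c] = attrs[members].mean(0)
    return desc

pts = np.random.rand(100, 3)
feat = octant_descriptor(pts, pts.copy())   # use xyz itself as the initial attribute
```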
no code implementations • IJCNLP 2019 • Xiabing Zhou, Zhongqing Wang, Shoushan Li, Guodong Zhou, Min Zhang
Accordingly, we propose a Neural Personal Discrimination (NPD) approach to address the above challenges by determining personal attributes from posts, and connecting relevant posts with similar attributes to jointly learn their emotions.
1 code implementation • 29 Aug 2019 • Haitao Wang, Zhengqiu He, Tong Zhu, Hao Shao, Wenliang Chen, Min Zhang
In this paper, we present the task definition, the description of data and the evaluation methodology used during this shared task.
1 code implementation • IJCNLP 2019 • Jie Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, Guodong Zhou
Recent studies on AMR-to-text generation often formalize the task as a sequence-to-sequence (seq2seq) learning problem by converting an Abstract Meaning Representation (AMR) graph into a word sequence.
no code implementations • 17 Oct 2019 • Shaoyun Shi, Hanxiong Chen, Min Zhang, Yongfeng Zhang
The fundamental idea behind the design of most neural networks is to learn similarity patterns from data for prediction and inference, which lacks the ability of logical reasoning.
no code implementations • IJCNLP 2019 • Jingjing Wang, Changlong Sun, Shoushan Li, Jiancheng Wang, Luo Si, Min Zhang, Xiaozhong Liu, Guodong Zhou
This approach incorporates clause selection and word selection strategies to tackle the data noise problem in the task of DASC.
1 code implementation • IJCNLP 2019 • Xiangyu Duan, Hongfei Yu, Mingming Yin, Min Zhang, Weihua Luo, Yue Zhang
We propose a contrastive attention mechanism to extend the sequence-to-sequence framework for abstractive sentence summarization task, which aims to generate a brief summary of a given source sentence.
no code implementations • CONLL 2019 • Yue Zhang, Wei Jiang, Qingrong Xia, Junjie Cao, Rui Wang, Zhenghua Li, Min Zhang
Our final submission ranks third on the overall MRP evaluation metric, first on EDS, and second on UCCA.
1 code implementation • IJCNLP 2019 • Qingrong Xia, Zhenghua Li, Min Zhang
In this paper, we adopt a simple unified span-based model for both span-based and word-based Chinese SRL as a strong baseline.
no code implementations • 20 Nov 2019 • Huan Zhang, Zhao Zhang, Mingbo Zhao, Qiaolin Ye, Min Zhang, Meng Wang
Our method can jointly recover the underlying clean data, clean labels and clean weighting spaces by decomposing the original data, predicted soft labels or weights into a clean part plus an error part by fitting noise.
no code implementations • 28 Nov 2019 • Zhengqiu He, Wenliang Chen, Yuyi Wang, Wei Zhang, Guanchun Wang, Min Zhang
We present a novel approach to improve the performance of distant supervision relation extraction with Positive and Unlabeled (PU) Learning.
no code implementations • 3 Dec 2019 • Baijun Ji, Zhirui Zhang, Xiangyu Duan, Min Zhang, Boxing Chen, Weihua Luo
However, existing transfer methods involving a common target language are far from success in the extreme scenario of zero-shot translation, due to the language space mismatch problem between transferor (the parent model) and transferee (the child model) on the source side.
no code implementations • IEEE Access 2019 • Rui Wang, Bicheng Li, Shengwei Hu, Wenqian Du, Min Zhang
However, these methods assign the same weights to the relation paths in the knowledge graph and ignore the rich information presented in neighbor nodes, which results in incomplete mining of triple features.
Ranked #12 on Link Prediction on WN18RR
no code implementations • ECCV 2020 • Dongsheng An, Yang Guo, Min Zhang, Xin Qi, Na Lei, Shing-Tung Yau, Xianfeng GU
Though generative adversarial networks (GANs) are prominent models for generating realistic and crisp images, they often encounter mode collapse and are hard to train, which comes from approximating the intrinsic discontinuous distribution transform map with continuous DNNs.
no code implementations • 22 Jan 2020 • Kun He, Min Zhang, Jianrong Zhou, Yan Jin, Chu-min Li
Inspired by its success in deep learning, we apply the idea of SGD with batch selection of samples to the decision version of a classic optimization problem.
2 code implementations • 9 Feb 2020 • Min Zhang, Yifan Wang, Pranav Kadam, Shan Liu, C. -C. Jay Kuo
The PointHop method was recently proposed by Zhang et al. for 3D point cloud classification with unsupervised feature extraction.
1 code implementation • 6 Mar 2020 • Houquan Zhou, Yu Zhang, Zhenghua Li, Min Zhang
In the pre-deep-learning era, part-of-speech tags have been considered as indispensable ingredients for feature engineering in dependency parsing.
no code implementations • 1 Apr 2020 • Jinshan Zeng, Min Zhang, Shao-Bo Lin
Boosting is a well-known method for improving the accuracy of weak learners in machine learning.
2 code implementations • ACL 2020 • Yu Zhang, Zhenghua Li, Min Zhang
Experiments and analysis on 27 datasets from 13 languages clearly show that techniques developed before the DL era, such as structural learning (global TreeCRF loss) and high-order modeling are still useful, and can further boost parsing performance over the state-of-the-art biaffine parser, especially for partially annotated training data.
Ranked #1 on Dependency Parsing on CoNLL-2009
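For context, the biaffine parser referenced above scores every dependent-head pair with a biaffine form; the sketch below shows only that scoring step, with illustrative dimensions and without the global TreeCRF loss:

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """s[i, j] = score of token j being the head of token i (toy version)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.dep = nn.Linear(hidden, hidden)    # dependent-side projection
        self.head = nn.Linear(hidden, hidden)   # head-side projection
        self.W = nn.Parameter(torch.zeros(hidden, hidden))
        self.b = nn.Parameter(torch.zeros(hidden))

    def forward(self, h):                        # h: (seq, hidden) encoder states
        d, e = torch.relu(self.dep(h)), torch.relu(self.head(h))
        return d @ self.W @ e.T + d @ self.b.unsqueeze(-1)  # (seq, seq) arc scores

scores = BiaffineArcScorer()(torch.randn(10, 128))
```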
2 code implementations • WWW 2020 • Chong Chen, Min Zhang, Weizhi Ma, Yiqun Liu, Shaoping Ma
Factorization Machines (FM) with negative sampling is a popular solution for context-aware recommendation.
3 code implementations • 28 Jun 2020 • Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Min Zhang, Shaoping Ma
Although exact term match between queries and documents is the dominant method to perform first-stage retrieval, we propose a different approach, called RepBERT, to represent documents and queries with fixed-length contextualized embeddings.
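A minimal dual-encoder sketch in the spirit of fixed-length contextualized embeddings: mean-pool BERT token states and score query-document pairs by inner product. The model name and pooling choice here are assumptions for illustration, not necessarily the released RepBERT configuration:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pool the last hidden states into one fixed-length vector per text."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state          # (b, seq, 768)
    mask = batch["attention_mask"].unsqueeze(-1)         # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)

q = embed(["what is dense retrieval"])
docs = embed(["Dense retrieval encodes texts into vectors.", "Unrelated text."])
scores = q @ docs.T                                       # inner-product relevance
print(scores)
```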
no code implementations • ACL 2020 • Xiao Chen, Changlong Sun, Jingjing Wang, Shoushan Li, Luo Si, Min Zhang, Guodong Zhou
This justifies the importance of the document-level sentiment preference information to ASC and the effectiveness of our approach capturing such information.
2 code implementations • 1 Jul 2020 • Chong Chen, Min Zhang, Weizhi Ma, Yiqun Liu, Shaoping Ma
However, existing KG enhanced recommendation methods have largely focused on exploring advanced neural network architectures to better investigate the structural information of KG.
no code implementations • ACL 2020 • Bo Zhang, Yue Zhang, Rui Wang, Zhenghua Li, Min Zhang
The experimental results show that syntactic information is highly valuable for ORL, and our final MTL model effectively boosts the F1 score by 9.29 over the syntax-agnostic baseline.
1 code implementation • ACL 2020 • Xiangyu Duan, Baijun Ji, Hao Jia, Min Tan, Min Zhang, Boxing Chen, Weihua Luo, Yue Zhang
In this paper, we propose a new task of machine translation (MT), which is based on no parallel sentences but can refer to a ground-truth bilingual dictionary.
1 code implementation • 12 Jul 2020 • Feiyu Yang, Zhan Song, Zhenzhong Xiao, Yu Chen, Zhe Pan, Min Zhang, Min Xue, Yaoyang Mo, Yao Zhang, Guoxiong Guan, Beibei Qian
Recently, the leading performance in human pose estimation has been dominated by heatmap-based methods.
no code implementations • 16 Jul 2020 • Wenjie Wan, Zhaodi Zhang, Yiwei Zhu, Min Zhang, Fu Song
The key insight of our approach is that the robustness verification problem of DNNs can be solved by verifying sub-problems of DNNs, one per target label.
3 code implementations • 20 Aug 2020 • Shaoyun Shi, Hanxiong Chen, Weizhi Ma, Jiaxin Mao, Min Zhang, Yongfeng Zhang
Both reasoning and generalization ability are important for prediction tasks such as recommender systems, where reasoning provides strong connection between user history and target items for accurate prediction, and generalization helps the model to draw a robust user portrait over noisy inputs.
no code implementations • 2 Sep 2020 • Pranav Kadam, Min Zhang, Shan Liu, C. -C. Jay Kuo
An unsupervised point cloud registration method, called salient points analysis (SPA), is proposed in this work.
no code implementations • 2 Sep 2020 • Min Zhang, Pranav Kadam, Shan Liu, C. -C. Jay Kuo
The UFF method exploits statistical correlations of points in a point cloud set to learn shape and point features in a one-pass feedforward manner through a cascaded encoder-decoder architecture.
1 code implementation • 10 Sep 2020 • Siteng Huang, Min Zhang, Yachen Kang, Donglin Wang
However, these approaches only augment the representations of samples with available semantics while ignoring the query set, which loses the potential for improvement and may lead to a shift between the modality combination and the pure-visual representation.
1 code implementation • 1 Oct 2020 • Ding-Nan Zou, Song-Hai Zhang, Tai-Jiang Mu, Min Zhang
It is currently the largest dataset for fine-grained classification of dogs, including 130 dog breeds and 70,428 real-world images.
1 code implementation • EMNLP 2020 • Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, Guodong Zhou
In the literature, the research on abstract meaning representation (AMR) parsing is much restricted by the size of human-curated dataset which is critical to build an AMR parser with good performance.
Ranked #15 on AMR Parsing on LDC2017T10 (using extra training data)
no code implementations • 17 Oct 2020 • Min Zhang, Yao Shu, Kun He
Finite-sum optimization plays an important role in the area of machine learning, and hence has triggered a surge of interest in recent years.
2 code implementations • 20 Oct 2020 • Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Min Zhang, Shaoping Ma
Through this process, it teaches the DR model how to retrieve relevant documents from the entire corpus instead of how to rerank a potentially biased sample of documents.
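Training against the entire corpus implies periodically mining hard negatives with the current model rather than reranking a fixed sample. The sketch below is schematic: random embeddings stand in for a trained DR encoder, and `mine_hard_negatives` is a hypothetical helper, not the authors' code:

```python
import numpy as np

def mine_hard_negatives(doc_vecs, query_vec, positive_ids, top_k=10):
    """Hypothetical helper: top-scoring corpus documents that are not positives."""
    scores = doc_vecs @ query_vec
    ranked = np.argsort(-scores)
    return [d for d in ranked[:top_k] if d not in positive_ids]

# Toy corpus-wide mining step; embeddings stand in for a trained DR encoder.
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(1000, 64))
query_vec = rng.normal(size=64)
hard_negs = mine_hard_negatives(doc_vecs, query_vec, positive_ids={3, 17})
# The DR model would then be updated on (query, positive, hard_negs) triples,
# and the mining step repeated with refreshed corpus embeddings.
```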
1 code implementation • COLING 2020 • Huaao Zhang, Shigui Qiu, Xiangyu Duan, Min Zhang
Neural machine translation with millions of parameters is vulnerable to unfamiliar inputs.
1 code implementation • COLING 2020 • Tong Zhu, Haitao Wang, Junjie Yu, Xiabing Zhou, Wenliang Chen, Wei Zhang, Min Zhang
The experimental results show that the ranking lists of the comparison systems on the DS-labelled test data and human-annotated test data are different.
1 code implementation • 30 Oct 2020 • Yangyang Guo, Liqiang Nie, Zhiyong Cheng, Qi Tian, Min Zhang
Concretely, we design a novel interpretation scheme whereby the loss of mis-predicted frequent and sparse answers of the same question type is distinctly exhibited during the late training phase.
no code implementations • Findings of the Association for Computational Linguistics 2020 • WeiSheng Zhang, Kaisong Song, Yangyang Kang, Zhongqing Wang, Changlong Sun, Xiaozhong Liu, Shoushan Li, Min Zhang, Luo Si
As an important research topic, customer service dialogue generation tends to generate generic seller responses by leveraging current dialogue information.
1 code implementation • COLING 2020 • Junjie Yu, Tong Zhu, Wenliang Chen, Wei Zhang, Min Zhang
In this paper, we propose an alternative approach to improve RE systems via enriching diverse expressions by relational paraphrase sentences.
no code implementations • COLING 2020 • Ying Li, Zhenghua Li, Min Zhang
The major challenge for current parsing research is to improve parsing performance on out-of-domain texts that are very different from the in-domain training data, when only small-scale out-of-domain labeled data is available.
no code implementations • COLING 2020 • Chen Gong, Zhenghua Li, Bowei Zou, Min Zhang
Detailed evaluation shows that our proposed model with weakly labeled data significantly outperforms the state-of-the-art MWS model by 1.12 and 5.97 on NEWS and BAIKE data in F1.
1 code implementation • COLING 2020 • Qingrong Xia, Rui Wang, Zhenghua Li, Yue Zhang, Min Zhang
Recently, due to the interplay between syntax and semantics, incorporating syntactic knowledge into neural semantic role labeling (SRL) has received much attention.
no code implementations • COLING 2020 • Huibin Ruan, Yu Hong, Yang Xu, Zhen Huang, Guodong Zhou, Min Zhang
We tackle implicit discourse relation recognition.
no code implementations • 24 Dec 2020 • Yunqiu Shao, Bulou Liu, Jiaxin Mao, Yiqun Liu, Min Zhang, Shaoping Ma
We participated in the two case law tasks, i.e., the legal case retrieval task and the legal case entailment task.
1 code implementation • Findings (ACL) 2021 • Hongqiu Wu, Hai Zhao, Min Zhang
Code summarization (CS) is becoming a promising area in recent language understanding; it aims to automatically generate sensible human-language descriptions for programs given in the form of source code, for the convenience of developers.
5 code implementations • ACL 2021 • Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents.
Ranked #1 on Key Information Extraction on SROIE
no code implementations • ICCV 2021 • Min Zhang, Yang Guo, Na lei, Zhou Zhao, Jianfeng Wu, Xiaoyin Xu, Yalin Wang, Xianfeng GU
Shape analysis has been playing an important role in early diagnosis and prognosis of neurodegenerative diseases such as Alzheimer's disease (AD).
1 code implementation • EMNLP 2021 • Kun Wu, Lijie Wang, Zhenghua Li, Ao Zhang, Xinyan Xiao, Hua Wu, Min Zhang, Haifeng Wang
For better distribution matching, we require that at least 80% of SQL patterns in the training data are covered by generated queries.
1 code implementation • 15 Mar 2021 • Pranav Kadam, Min Zhang, Shan Liu, C. -C. Jay Kuo
Inspired by the recent PointHop classification method, an unsupervised 3D point cloud registration method, called R-PointHop, is proposed in this work.
no code implementations • 17 Mar 2021 • Juntao Li, Chang Liu, Chongyang Tao, Zhangming Chan, Dongyan Zhao, Min Zhang, Rui Yan
To fill the gap between these up-to-date methods and the real-world applications, we incorporate user-specific dialogue history into the response selection and propose a personalized hybrid matching network (PHMN).
no code implementations • NeurIPS 2021 • Hongqiu Wu, Hai Zhao, Min Zhang
Beyond the success story of pre-trained language models (PrLMs) in recent natural language processing, they are susceptible to over-fitting due to their unusually large model sizes.
4 code implementations • 16 Apr 2021 • Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, Shaoping Ma
ADORE replaces the widely-adopted static hard negative sampling method with a dynamic one to directly optimize the ranking performance.
1 code implementation • ACL 2021 • Chen Gong, Saihao Huang, Houquan Zhou, Zhenghua Li, Min Zhang, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan
Several previous works on syntactic parsing propose to annotate shallow word-internal structures for better utilizing character-level information.
1 code implementation • NAACL 2021 • Qingrong Xia, Bo Zhang, Rui Wang, Zhenghua Li, Yue Zhang, Fei Huang, Luo Si, Min Zhang
Fine-grained opinion mining (OM) has attracted increasing attention in the natural language processing (NLP) community, which aims to find the opinion structures of "Who expressed what opinions towards what" in one sentence.
1 code implementation • Findings (ACL) 2021 • Jinpeng Zhang, Baijun Ji, Nini Xiao, Xiangyu Duan, Min Zhang, Yangbin Shi, Weihua Luo
Bilingual Lexicon Induction (BLI) aims to map words in one language to their translations in another, and is typically through learning linear projections to align monolingual word representation spaces.
no code implementations • ACL 2021 • Xin Liu, Baosong Yang, Dayiheng Liu, Haibo Zhang, Weihua Luo, Min Zhang, Haiying Zhang, Jinsong Su
A well-known limitation in pretrain-finetune paradigm lies in its inflexibility caused by the one-size-fits-all vocabulary.
2 code implementations • 11 Jun 2021 • Bin Hao, Min Zhang, Weizhi Ma, Shaoyun Shi, Xinxing Yu, Houzhi Shan, Yiqun Liu, Shaoping Ma
To the best of our knowledge, this is the largest real-world interaction dataset for personalized recommendation.
no code implementations • 13 Jun 2021 • Peng Jin, Min Zhang, Jianwen Li, Li Han, Xuejun Wen
Formally verifying Deep Reinforcement Learning (DRL) systems is a challenging task due to the dynamic continuity of system behaviors and the black-box feature of embedded neural networks.
9 code implementations • NeurIPS 2021 • Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu
Dropout is a powerful and widely used technique to regularize the training of deep neural networks.
Ranked #4 on Machine Translation on WMT2014 English-French
no code implementations • 16 Jul 2021 • Pengju Zhang, Yonghui Jia, Muhua Zhu, Wenliang Chen, Min Zhang
Previous works for encoding questions mainly focus on the word sequences, but seldom consider the information from syntactic trees. In this paper, we propose an approach to learn syntax-based representations for KBQA.
1 code implementation • Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021 • An-Hui Wang, Linfeng Song, Hui Jiang, Shaopeng Lai, Junfeng Yao, Min Zhang, Jinsong Su
Conversational discourse structures aim to describe how a dialogue is organised, thus they are helpful for dialogue understanding and response generation.
Ranked #3 on Discourse Parsing on STAC
no code implementations • ACL 2021 • Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, Guodong Zhou
We hope that knowledge gained while learning for English AMR parsing and text generation can be transferred to the counterparts of other languages.
no code implementations • ACL 2021 • Linqing Chen, Junhui Li, ZhengXian Gong, Boxing Chen, Weihua Luo, Min Zhang, Guodong Zhou
To this end, we propose two pre-training tasks.
5 code implementations • 2 Aug 2021 • Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, Shaoping Ma
Compared with previous DR models that use brute-force search, JPQ almost matches the best retrieval performance with 30x compression on index size.
1 code implementation • 3 Aug 2021 • Ziyi Ye, Xiaohui Xie, Yiqun Liu, Zhihong Wang, Xuesong Chen, Min Zhang, Shaoping Ma
In this paper, we carefully design a lab-based user study to investigate brain activities during reading comprehension.
no code implementations • 9 Aug 2021 • Minghan Wang, Yuxia Wang, Chang Su, Jiaxin Guo, Yingtao Zhang, Yujia Liu, Min Zhang, Shimin Tao, Xingshan Zeng, Liangyou Li, Hao Yang, Ying Qin
This paper describes our work in participation of the IWSLT-2021 offline speech translation task.
no code implementations • 1 Sep 2021 • Xiaotian Jiang, Danshi Wang, Qirui Fan, Min Zhang, Chao Lu, Alan Pak Tao Lau
A physics-informed neural network (PINN) that combines deep learning with physics is studied to solve the nonlinear Schrödinger equation for learning nonlinear dynamics in fiber optics.
no code implementations • 22 Sep 2021 • Ziyi Ye, Xiaohui Xie, Yiqun Liu, Zhihong Wang, Xuancheng Li, Jiaji Li, Xuesong Chen, Min Zhang, Shaoping Ma
Inspired by these findings, we conduct supervised learning tasks to estimate the usefulness of non-click results with brain signals and conventional information (i.e., content and context factors).
no code implementations • 24 Sep 2021 • Min Zhang, Pranav Kadam, Shan Liu, C. -C. Jay Kuo
It is named GSIP (Green Segmentation of Indoor Point clouds) and its performance is evaluated on a representative large-scale benchmark -- the Stanford 3D Indoor Segmentation (S3DIS) dataset.
no code implementations • 29 Sep 2021 • Jiadong Lin, Yifeng Xiong, Min Zhang, John E. Hopcroft, Kun He
Black-box adversarial attack has attracted much attention for its practical use in deep learning applications, and it is very challenging as there is no access to the architecture and weights of the target model.
no code implementations • 29 Sep 2021 • Yue Wang, Lijun Wu, Xiaobo Liang, Juntao Li, Min Zhang
Starting from the resurgence of deep learning, language models (LMs) have never been so popular.
no code implementations • 29 Sep 2021 • Xiaobo Liang, Runze Mao, Lijun Wu, Juntao Li, Weiqing Liu, Qing Li, Min Zhang
The common approach of consistency training operates at the data level, typically utilizing a data augmentation strategy (or adversarial training) to make the predictions from the augmented input and the original input consistent, so that the model is more robust and attains better generalization ability.
4 code implementations • 12 Oct 2021 • Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, Shaoping Ma
However, the efficiency of most existing DR models is limited by the large memory cost of storing dense vectors and the time-consuming nearest neighbor search (NNS) in vector space.
1 code implementation • COLING 2022 • Yu Zhang, Qingrong Xia, Shilin Zhou, Yong Jiang, Guohong Fu, Min Zhang
Semantic role labeling (SRL) is a fundamental yet challenging task in the NLP community.
no code implementations • 14 Oct 2021 • Xuesong Chen, Ziyi Ye, Xiaohui Xie, Yiqun Liu, Weihang Su, Shuqi Zhu, Min Zhang, Shaoping Ma
While search technologies have evolved to be robust and ubiquitous, the fundamental interaction paradigm has remained relatively stable for decades.
1 code implementation • CVPR 2022 • Yifeng Xiong, Jiadong Lin, Min Zhang, John E. Hopcroft, Kun He
The black-box adversarial attack has attracted impressive attention for its practical use in the field of deep learning security.
no code implementations • 27 Nov 2021 • Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, Shaoping Ma
Dense Retrieval (DR) reaches state-of-the-art results in first-stage retrieval, but little is known about the mechanisms that contribute to its success.
1 code implementation • COLING 2022 • Shilin Zhou, Qingrong Xia, Zhenghua Li, Yu Zhang, Yu Hong, Min Zhang
Moreover, we propose a simple constrained Viterbi procedure to ensure the legality of the output graph according to the constraints of the SRL structure.
no code implementations • 8 Dec 2021 • Pranav Kadam, Min Zhang, Jiahao Gu, Shan Liu, C. -C. Jay Kuo
GreenPCO is an unsupervised learning method that predicts object motion by matching features of consecutive point cloud scans.
1 code implementation • 11 Dec 2021 • Tong Zhu, Xiaoye Qu, Wenliang Chen, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan, Min Zhang
Most previous studies of document-level event extraction mainly focus on building argument chains in an autoregressive way, which achieves a certain success but is inefficient in both training and inference.
Ranked #3 on Document-level Event Extraction on ChFinAnn
1 code implementation • 13 Dec 2021 • Shiping Li, Min Cao, Min Zhang
In this paper, we propose a semantic-aligned embedding method for text-based person search, in which the feature alignment across modalities is achieved by automatically learning the semantic-aligned visual features and textual features.
Ranked #8 on Text based Person Retrieval on CUHK-PEDES
2 code implementations • 13 Dec 2021 • Chong Liu, Xiaoyang Liu, Rongqin Zheng, Lixin Zhang, Xiaobo Liang, Juntao Li, Lijun Wu, Min Zhang, Leyu Lin
State-of-the-art sequential recommendation models proposed very recently combine contrastive learning techniques for obtaining high-quality user representations.
no code implementations • 22 Dec 2021 • Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Yuxia Wang, Zongyao Li, Zhengzhe Yu, Zhanglin Wu, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang
An effective training strategy to improve the performance of AT models is Self-Distillation Mixup (SDM) Training, which pre-trains a model on raw data, generates distilled data by the pre-trained model itself and finally re-trains a model on the combination of raw data and distilled data.
1 code implementation • 22 Dec 2021 • Changxing Wu, Liuwen Cao, Yubin Ge, Yang Liu, Min Zhang, Jinsong Su
Then, we employ a label sequence decoder to output the predicted labels in a top-down manner, where the predicted higher-level labels are directly used to guide the label prediction at the current level.
no code implementations • EAMT 2022 • Minghan Wang, Jiaxin Guo, Yuxia Wang, Daimeng Wei, Hengchao Shang, Chang Su, Yimeng Chen, Yinglu Li, Min Zhang, Shimin Tao, Hao Yang
In this paper, we aim to close the gap by preserving the original objective of AR and NAR under a unified framework.
no code implementations • 22 Dec 2021 • Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Yuxia Wang, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang
Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but it reaches the upper bound of translation quality when the number of encoder layers exceeds 18.
1 code implementation • 18 Jan 2022 • Chong Chen, Fei Sun, Min Zhang, Bolin Ding
From the perspective of utility, if a system's utility is damaged by some bad data, the system needs to forget these data to regain utility.
no code implementations • ACL 2022 • Chulun Zhou, Fandong Meng, Jie zhou, Min Zhang, Hongji Wang, Jinsong Su
Most dominant neural machine translation (NMT) models are restricted to make predictions only according to the local context of preceding words in a left-to-right manner.
1 code implementation • Findings (ACL) 2022 • Yiming Zhang, Min Zhang, Sai Wu, Junbo Zhao
The aspect-based sentiment analysis (ABSA) is a fine-grained task that aims to determine the sentiment polarity towards targeted aspect terms occurring in the sentence.
Ranked #4 on Aspect-Based Sentiment Analysis (ABSA) on SemEval-2014 Task-4 (using extra training data)
1 code implementation • ACL 2022 • Bei Li, Quan Du, Tao Zhou, Yi Jing, Shuhan Zhou, Xin Zeng, Tong Xiao, Jingbo Zhu, Xuebo Liu, Min Zhang
Inspired by this, we design a new architecture, ODE Transformer, which is analogous to the Runge-Kutta method that is well motivated in ODE.
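The analogy treats a Transformer layer F as the derivative function of an ODE and combines several evaluations of F in one block update. The sketch below uses a plain RK4-style scheme with a toy feed-forward F; the actual ODE Transformer uses full Transformer layers and its own coefficient schemes:

```python
import torch
import torch.nn as nn

class RKBlock(nn.Module):
    """Runge-Kutta style residual block: several evaluations of the same
    layer F are combined into one update, mirroring the RK4 scheme."""
    def __init__(self, dim=64):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, y):
        f1 = self.F(y)
        f2 = self.F(y + 0.5 * f1)
        f3 = self.F(y + 0.5 * f2)
        f4 = self.F(y + f3)
        return y + (f1 + 2 * f2 + 2 * f3 + f4) / 6.0

x = torch.randn(8, 64)
out = RKBlock()(x)
```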
1 code implementation • Findings (ACL) 2022 • Houquan Zhou, Yang Li, Zhenghua Li, Min Zhang
In recent years, large-scale pre-trained language models (PLMs) have made extraordinary progress in most NLP tasks.
no code implementations • 28 Mar 2022 • Min Cao, Shiping Li, Juntao Li, Liqiang Nie, Min Zhang
On top of this, efficiency-focused studies of ITR systems are introduced as the third perspective.
no code implementations • 5 Apr 2022 • Yangkun Li, Weizhi Ma, Chong Chen, Min Zhang, Yiqun Liu, Shaoping Ma, Yuekui Yang
Among various methods of coping with overfitting, dropout is one of the representative ways.
1 code implementation • 6 Apr 2022 • Zhumin Chu, Qingyao Ai, Zhihong Wang, Yiqun Liu, Yingye Huang, Rui Zhang, Min Zhang, Shaoping Ma
This raises the question of how to accurately model user satisfaction in conversational search scenarios.
no code implementations • 16 Apr 2022 • Zheng Zhang, Liang Ding, Dazhao Cheng, Xuebo Liu, Min Zhang, DaCheng Tao
Data augmentation (DA) is core to achieving robust sequence-to-sequence learning on various natural language processing (NLP) tasks.
1 code implementation • 20 Apr 2022 • Yisheng Xiao, Lijun Wu, Junliang Guo, Juntao Li, Min Zhang, Tao Qin, Tie-Yan Liu
While NAR generation can significantly accelerate inference speed for machine translation, the speedup comes at the cost of sacrificed translation accuracy compared to its counterpart, autoregressive (AR) generation.
1 code implementation • ACL 2022 • Xin Zhang, Guangwei Xu, Yueheng Sun, Meishan Zhang, Xiaobin Wang, Min Zhang
Recent works of opinion expression identification (OEI) rely heavily on the quality and scale of the manually-constructed training corpus, which could be extremely difficult to satisfy.
2 code implementations • NAACL 2022 • Yue Zhang, Zhenghua Li, Zuyi Bao, Jiacheng Li, Bo Zhang, Chen Li, Fei Huang, Min Zhang
This paper presents MuCGEC, a multi-reference multi-source evaluation dataset for Chinese Grammatical Error Correction (CGEC), consisting of 7,063 sentences collected from three Chinese-as-a-Second-Language (CSL) learner sources.
no code implementations • 25 Apr 2022 • Fuchuan Tong, Siqi Zheng, Min Zhang, Yafeng Chen, Hongbin Suo, Qingyang Hong, Lin Li
In this work, we present a GCN-based approach for semi-supervised learning.
1 code implementation • 25 Apr 2022 • Jingtao Zhan, Xiaohui Xie, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, Shaoping Ma
For example, representation-based retrieval models perform almost as well as interaction-based retrieval models in terms of interpolation but not extrapolation.
1 code implementation • NAACL 2022 • Linzhi Wu, Pengjun Xie, Jie zhou, Meishan Zhang, Chunping Ma, Guangwei Xu, Min Zhang
Prior research has mainly resorted to heuristic rule-based constraints to reduce the noise for specific self-augmentation methods individually.
1 code implementation • NAACL 2022 • Yahui Liu, Haoping Yang, Chen Gong, Qingrong Xia, Zhenghua Li, Min Zhang
1) Based on a frame-free annotation methodology, we avoid writing complex frames for new predicates.
1 code implementation • 25 May 2022 • Yang Xu, Yutai Hou, Wanxiang Che, Min Zhang
On the newly defined cross-lingual model editing task, we empirically demonstrate the failure of monolingual baselines in propagating the edit to multiple languages and the effectiveness of the proposed language anisotropic model editing.
no code implementations • 1 Jun 2022 • Omri Isac, Clark Barrett, Min Zhang, Guy Katz
In this work, we present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.
no code implementations • 8 Jun 2022 • Yifan Wang, Weizhi Ma, Min Zhang, Yiqun Liu, Shaoping Ma
First, we summarize fairness definitions in the recommendation and provide several views to classify fairness issues.
no code implementations • 17 Jun 2022 • Yu Zhao, Yunxin Li, Yuxiang Wu, Baotian Hu, Qingcai Chen, Xiaolong Wang, Yuxin Ding, Min Zhang
To mitigate this problem, we propose a medical response generation model with Pivotal Information Recalling (MedPIR), which is built on two components, i.e., a knowledge-aware dialogue graph encoder and a recall-enhanced generator.
no code implementations • 23 Jun 2022 • Yanxiang Jiang, Min Zhang, Fu-Chun Zheng, Yan Chen, Mehdi Bennis, Xiaohu You
In this paper, cooperative edge caching problem is studied in fog radio access networks (F-RANs).
1 code implementation • 25 Jun 2022 • Hongqiu Wu, Ruixue Ding, Hai Zhao, Pengjun Xie, Fei Huang, Min Zhang
Deep neural models (e.g., Transformer) naturally learn spurious features, which create a "shortcut" between the labels and inputs, thus impairing the generalization and robustness.
Ranked #1 on Machine Reading Comprehension on DREAM
2 code implementations • 26 Jun 2022 • Chenyang Wang, Yuanqing Yu, Weizhi Ma, Min Zhang, Chong Chen, Yiqun Liu, Shaoping Ma
Then, we empirically analyze the learning dynamics of typical CF methods in terms of quantified alignment and uniformity, which shows that better alignment or uniformity both contribute to higher recommendation performance.
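Alignment and uniformity are commonly quantified following Wang and Isola (2020): the mean distance between normalized positive-pair embeddings, and the log mean Gaussian potential over all pairs. A small sketch of those two metrics; the batch size and temperature t=2 are illustrative choices:

```python
import torch
import torch.nn.functional as F

def alignment(user_emb, item_emb):
    """Mean squared distance between matched (positive) user-item pairs."""
    u, i = F.normalize(user_emb, dim=-1), F.normalize(item_emb, dim=-1)
    return (u - i).norm(dim=1).pow(2).mean()

def uniformity(emb, t=2):
    """Log of the mean Gaussian potential over all pairs of embeddings."""
    e = F.normalize(emb, dim=-1)
    return torch.pdist(e, p=2).pow(2).mul(-t).exp().mean().log()

users, items = torch.randn(256, 64), torch.randn(256, 64)
print(alignment(users, items), uniformity(items))
```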
1 code implementation • 14 Jul 2022 • Min Zhang, Siteng Huang, Wenbin Li, Donglin Wang
To solve this problem, we present a plug-in Hierarchical Tree Structure-aware (HTS) method, which not only learns the relationship of FSL and pretext tasks, but more importantly, can adaptively select and aggregate feature representations generated by pretext tasks to maximize the performance of FSL tasks.
1 code implementation • 23 Jul 2022 • Qian Yang, Yunxin Li, Baotian Hu, Lin Ma, Yuxing Ding, Min Zhang
CSI), a relation inferrer, and a Lexical Constraint-aware Generator (arr.
1 code implementation • 24 Jul 2022 • Min Zhang, Zhihong Pan, Xin Zhou, C. -C. Jay Kuo
Normalizing flow models have been used successfully for generative image super-resolution (SR) by approximating complex distribution of natural images to simple tractable distribution in latent space through Invertible Neural Networks (INN).
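The invertible building block behind such flows is typically an affine coupling layer: half the variables pass through unchanged and parameterize a scale and shift for the other half, so the mapping is exactly invertible. A toy sketch, not the paper's SR-specific architecture:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling: x2 is scaled and shifted conditioned on x1."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))      # outputs (log_s, t)

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * log_s.exp() + t
        log_det = log_s.sum(-1)                            # for the flow likelihood
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * (-log_s).exp()], dim=-1)

layer = AffineCoupling(dim=8)
z, log_det = layer(torch.randn(4, 8))
x_back = layer.inverse(z)   # recovers the input up to numerical precision
```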
1 code implementation • 11 Aug 2022 • Jingtao Zhan, Qingyao Ai, Yiqun Liu, Jiaxin Mao, Xiaohui Xie, Min Zhang, Shaoping Ma
By making the REM and DAMs disentangled, DDR enables a flexible training paradigm in which REM is trained with supervision once and DAMs are trained with unsupervised data.
1 code implementation • 17 Aug 2022 • Ziyi Ye, Xiaohui Xie, Yiqun Liu, Zhihong Wang, Xuesong Chen, Min Zhang, Shaoping Ma
We explore the effectiveness of BTA for satisfaction modeling in two popular information access scenarios, i.e., search and recommendation.
no code implementations • 21 Aug 2022 • Zhaodi Zhang, Yiting Wu, Si Liu, Jing Liu, Min Zhang
Considerable efforts have been devoted to finding the so-called tighter approximations to obtain more precise verification results.
no code implementations • 26 Aug 2022 • Saihao Huang, Lijie Wang, Zhenghua Li, Zeyang Liu, Chenhui Dou, Fukang Yan, Xinyan Xiao, Hua Wu, Min Zhang
As the first session-level Chinese dataset, CHASE contains two separate parts, i.e., 2,003 sessions manually constructed from scratch (CHASE-C), and 3,456 sessions translated from English SParC (CHASE-T).
no code implementations • 16 Sep 2022 • Min Zhang, Hongyao Tang, Jianye Hao, Yan Zheng
First, we propose a unified policy abstraction theory, containing three types of policy abstraction associated with policy features at different levels.
1 code implementation • COLING 2022 • Dan Qiao, Chenchen Dai, Yuyang Ding, Juntao Li, Qiang Chen, Wenliang Chen, Min Zhang
The conventional success of textual classification relies on annotated data, and the new paradigm of pre-trained language models (PLMs) still requires a few labeled data for downstream tasks.