no code implementations • LREC 2012 • Ting Liu, Samira Shaikh, Tomek Strzalkowski, Aaron Broadwell, Jennifer Stromer-Galley, Sarah Taylor, Umit Boz, Xiaoai Ren, Jingsi Wu
In this paper, we report our efforts in building a multi-lingual multi-party online chat corpus in order to develop a firm understanding of a set of social constructs such as agenda control, influence, and leadership, as well as to computationally model such constructs in online interactions.
no code implementations • 28 Mar 2014 • Duyu Tang, Bing Qin, Ting Liu, Qiuhui Shi
In order to analyze emotional changes across time and space, this paper presents an Emotion Analysis Platform (EAP), which explores the emotional distribution of each province so that the emotional pulse of every province in China can be monitored.
no code implementations • LREC 2014 • Ting Liu, Kit Cho, G. Aaron Broadwell, Samira Shaikh, Tomek Strzalkowski, John Lien, Sarah Taylor, Laurie Feldman, Boris Yamrom, Nick Webb, Umit Boz, Ignacio Cases, Ching-Sheng Lin
Unfortunately, word imageability ratings were collected for only a limited number of words: 9,240 words in English and 6,233 in Spanish; they are entirely unavailable in the other two languages studied: Russian and Farsi.
no code implementations • LREC 2014 • Samira Shaikh, Tomek Strzalkowski, Ting Liu, George Aaron Broadwell, Boris Yamrom, Sarah Taylor, Laurie Feldman, Kit Cho, Umit Boz, Ignacio Cases, Yuliya Peshkova, Ching-Sheng Lin
Researchers in the field can use this resource as a reference of typical metaphors used across these cultures.
1 code implementation • 24 May 2015 • Ting Liu, Mojtaba Seyedhosseini, Tolga Tasdizen
Starting from over-segmented superpixels, we use a tree structure to represent the hierarchy of region merging, which reduces the problem of segmenting image regions to finding a set of label assignments for tree nodes.
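As an illustrative sketch (not the paper's code), the reduction of region segmentation to labeling merge-tree nodes can be framed as a bottom-up choice, for every subtree, between keeping the merged region at its root and taking the best labeling of its children; `best_cut` and the toy scores below are assumptions for the example:

```python
def best_cut(node, score):
    """DP over a region-merging tree: for each node, keep the merged
    region (its own score) if that beats the best cut of its children.
    node = (region_id, [child nodes]); score maps region_id -> quality."""
    region_id, children = node
    if not children:  # leaf superpixel: no choice to make
        return score[region_id], [region_id]
    child_total, child_regions = 0.0, []
    for child in children:
        s, regions = best_cut(child, score)
        child_total += s
        child_regions += regions
    if score[region_id] >= child_total:
        return score[region_id], [region_id]
    return child_total, child_regions

# Toy merge tree: region 'ab' is the merge of superpixels 'a' and 'b'.
tree = ("ab", [("a", []), ("b", [])])
print(best_cut(tree, {"ab": 1.0, "a": 0.75, "b": 0.5}))  # (1.25, ['a', 'b'])
```

With these scores the children jointly outscore the merged region, so the cut keeps `a` and `b` separate; raising the score of `ab` above 1.25 would flip the decision.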
no code implementations • CVPR 2015 • Ting Liu, Gang Wang, Qingxiong Yang
However, the conventional correlation filter based trackers cannot deal with occlusion.
no code implementations • 25 Nov 2015 • Li Wang, Ting Liu, Gang Wang, Kap Luk Chan, Qingxiong Yang
The adaptation is conducted in both layers of the deep feature learning module so as to include appearance information of the specific target object.
10 code implementations • COLING 2016 • Duyu Tang, Bing Qin, Xiaocheng Feng, Ting Liu
Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence.
Aspect-Based Sentiment Analysis (ABSA) General Classification +2
no code implementations • 18 Dec 2015 • Bing Qin, Duyu Tang, Xinwei Geng, Dandan Ning, Jiahao Liu, Ting Liu
Generating an article automatically with a computer program is a challenging task in artificial intelligence and natural language processing.
no code implementations • 22 Dec 2015 • Jiuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Li Wang, Gang Wang, Jianfei Cai, Tsuhan Chen
In the last few years, deep learning has led to very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing.
no code implementations • 5 Mar 2016 • Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, Ting Liu
Cross-lingual model transfer has been a promising approach for inducing dependency parsers for low-resource languages where annotated treebanks are not available.
Cross-lingual zero-shot dependency parsing Representation Learning
1 code implementation • 19 Apr 2016 • Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, Ting Liu
Many natural language processing (NLP) tasks can be generalized into a segmentation problem.
no code implementations • 20 Apr 2016 • Qingyu Yin, Wei-Nan Zhang, Yu Zhang, Ting Liu
This is because zero pronouns have no descriptive information, which results in difficulty in explicitly capturing their semantic similarities with antecedents.
no code implementations • LREC 2016 • Samira Shaikh, Kit Cho, Tomek Strzalkowski, Laurie Feldman, John Lien, Ting Liu, George Aaron Broadwell
The main contributions of this work are: 1) A general method for expansion and creation of lexicons with scores of words on psychological constructs such as valence, arousal or dominance; and 2) a procedure for ensuring validity of the newly constructed resources.
no code implementations • LREC 2016 • Ting Liu, Kit Cho, Tomek Strzalkowski, Samira Shaikh, Mehrdad Mirzaei
In this article, we present a method to validate a multi-lingual (English, Spanish, Russian, and Farsi) corpus on imageability ratings automatically expanded from MRCPD (Liu et al., 2014).
2 code implementations • 4 May 2016 • Mehran Javanmardi, Mehdi Sajjadi, Ting Liu, Tolga Tasdizen
This can be seen as a regularization term that promotes piecewise smoothness of the label probability vector image produced by the ConvNet during learning.
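The regularization described above can be sketched as a squared-difference penalty between neighboring pixels of the label-probability image; the exact penalty form in the paper may differ, and `smoothness_penalty` is an assumed name:

```python
import numpy as np

def smoothness_penalty(prob):
    """Sum of squared differences between vertically and horizontally
    adjacent pixels of a per-pixel label-probability image (H, W, C);
    zero for a perfectly constant (piecewise-smooth) prediction."""
    dy = prob[1:, :, :] - prob[:-1, :, :]
    dx = prob[:, 1:, :] - prob[:, :-1, :]
    return float((dy ** 2).sum() + (dx ** 2).sum())

uniform = np.full((8, 8, 2), 0.5)   # constant map incurs no penalty
print(smoothness_penalty(uniform))  # 0.0
```

Added to the classification loss, such a term pushes the network toward label maps that change only at region boundaries.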
no code implementations • 7 May 2016 • Wei-Nan Zhang, Ting Liu, Qingyu Yin, Yu Zhang
Dropped pronouns (DPs) are ubiquitous in pro-drop languages such as Chinese and Japanese.
no code implementations • 15 May 2016 • Bing Wang, Li Wang, Bing Shuai, Zhen Zuo, Ting Liu, Kap Luk Chan, Gang Wang
Then the Siamese CNN and temporally constrained metrics are jointly learned online to construct the appearance-based tracklet affinity models.
8 code implementations • EMNLP 2016 • Duyu Tang, Bing Qin, Ting Liu
Such importance degree and text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory.
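One of the stacked computational layers, a single attention hop over an external memory, might look like the following toy sketch (not the authors' implementation; the softmax attention form and names are assumptions):

```python
import numpy as np

def attention_hop(memory, query):
    """One attention hop over an external memory: dot-product scores are
    softmax-normalized and used to take a weighted sum of memory rows.
    memory: (n_words, d) context-word vectors; query: (d,) aspect vector."""
    scores = memory @ query
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory

# Stacking layers: feed each hop's output back in as the next query.
memory = np.eye(3)  # three toy context-word embeddings
attended = attention_hop(memory, np.array([5.0, 0.0, 0.0]))
```

The query most similar to the first memory row dominates the weighted sum, which is the intuition behind attending to the context words most relevant to the target.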
Aspect-Based Sentiment Analysis (ABSA) General Classification +1
no code implementations • 3 Jun 2016 • Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu
Various treebanks have been released for dependency parsing.
no code implementations • ACL 2017 • Ting Liu, Yiming Cui, Qingyu Yin, Wei-Nan Zhang, Shijin Wang, Guoping Hu
Most existing approaches for zero pronoun resolution rely heavily on annotated data, which is often released by shared task organizers.
no code implementations • COLING 2016 • Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu
Reading comprehension has seen a boom in recent NLP research.
2 code implementations • ACL 2017 • Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, Guoping Hu
Cloze-style queries are representative problems in reading comprehension.
Ranked #3 on Question Answering on Children's Book Test
1 code implementation • 14 Aug 2016 • Ting Liu, Miaomiao Zhang, Mehran Javanmardi, Nisha Ramesh, Tolga Tasdizen
We then propose a Bayesian model that combines the supervised and the unsupervised information for probabilistic learning.
Electron Microscopy Image Segmentation Image Segmentation +2
no code implementations • 19 Aug 2016 • Qingfu Zhu, Wei-Nan Zhang, Lianqiang Zhou, Ting Liu
An obvious drawback of these works is that there is no learnable relationship between words and the start symbol.
no code implementations • 10 Oct 2016 • Xiaofei Sun, Jiang Guo, Xiao Ding, Ting Liu
This paper investigates the problem of network embedding, which aims at learning low-dimensional vector representation of nodes in networks.
no code implementations • 7 Nov 2016 • Luyang Li, Bing Qin, Wenjing Ren, Ting Liu
We use feedforward memory network and feedback memory network to learn the representation of the credibility of statements which are about the same object.
no code implementations • 28 Nov 2016 • Bing Shuai, Ting Liu, Gang Wang
In addition, dense skip connections are added so that the context network can be effectively optimized.
no code implementations • COLING 2016 • Yang Li, Ting Liu, Jing Jiang, Liang Zhang
Microblogging services allow users to create hashtags to categorize their posts.
no code implementations • COLING 2016 • Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu, Jun Xu
This paper describes a unified neural architecture for identifying and classifying multi-typed semantic relations between words in a sentence.
no code implementations • COLING 2016 • Wei Song, Tong Liu, Ruiji Fu, Lizhen Liu, Hanshi Wang, Ting Liu
Parallelism is an important rhetorical device.
no code implementations • COLING 2016 • Shaolei Wang, Wanxiang Che, Ting Liu
We treat disfluency detection as a sequence-to-sequence problem and propose a neural attention-based model which can efficiently model the long-range dependencies between words and make the resulting sentence more likely to be grammatically correct.
no code implementations • COLING 2016 • Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu
Various treebanks have been released for dependency parsing.
no code implementations • COLING 2016 • Wei Song, Ruiji Fu, Lizhen Liu, Hanshi Wang, Ting Liu
More importantly, we uncover the anecdote implication, which reveals the meaning and topic of an anecdote.
no code implementations • COLING 2016 • Xiao Ding, Yue Zhang, Ting Liu, Junwen Duan
Representing structured events as vectors in continuous space offers a new way for defining dense features for natural language processing (NLP) applications.
no code implementations • COLING 2016 • Xiaocheng Feng, Duyu Tang, Bing Qin, Ting Liu
Knowledge base (KB) such as Freebase plays an important role for many natural language processing tasks.
no code implementations • WS 2016 • Bo Zheng, Wanxiang Che, Jiang Guo, Ting Liu
This paper introduces our Chinese Grammatical Error Diagnosis (CGED) system in the NLP-TEA-3 shared task for CGED.
no code implementations • 9 Jan 2017 • Wei-Nan Zhang, Ting Liu, Yifa Wang, Qingfu Zhu
Moreover, the lexical divergence of the responses generated by the 5 personalized models indicates that the proposed two-phase approach achieves good results on modeling the responding style of humans and generating personalized responses for conversational systems.
no code implementations • ACL 2017 • Wei Song, Dong Wang, Ruiji Fu, Lizhen Liu, Ting Liu, Guoping Hu
Evaluation results show that discourse modes can be identified automatically with an average F1-score of 0.7.
no code implementations • SEMEVAL 2017 • Le Qi, Yu Zhang, Ting Liu
We describe a method of calculating the similarity of questions in community QA.
no code implementations • CONLL 2017 • Wanxiang Che, Jiang Guo, Yuxuan Wang, Bo Zheng, Huaipeng Zhao, Yang Liu, Dechuan Teng, Ting Liu
Our system includes three pipelined components: \textit{tokenization}, \textit{Part-of-Speech} (POS) \textit{tagging} and \textit{dependency parsing}.
no code implementations • EMNLP 2017 • Qingyu Yin, Yu Zhang, Wei-Nan Zhang, Ting Liu
Existing approaches for Chinese zero pronoun resolution typically utilize only syntactical and lexical features while ignoring semantic information.
1 code implementation • EMNLP 2017 • Shaolei Wang, Wanxiang Che, Yue Zhang, Meishan Zhang, Ting Liu
In this paper, we model the problem of disfluency detection using a transition-based framework, which incrementally constructs and labels the disfluency chunk of input sentences using a new transition system without syntax information.
1 code implementation • LREC 2018 • Yiming Cui, Ting Liu, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu
Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention.
2 code implementations • 29 Sep 2017 • Wei-Nan Zhang, Zhigang Chen, Wanxiang Che, Guoping Hu, Ting Liu
In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology.
3 code implementations • CVPR 2018 • Phuc Nguyen, Ting Liu, Gautam Prasad, Bohyung Han
We propose a weakly supervised temporal action localization algorithm on untrimmed videos using convolutional neural networks.
Ranked #13 on Weakly Supervised Action Localization on ActivityNet-1.3 (mAP@0.5 metric)
no code implementations • 6 Jan 2018 • Li Wang, Ting Liu, Bing Wang, Xulei Yang, Gang Wang
First, we learn RNN parameters to discriminate between the target object and background in the first frame of a test sequence.
no code implementations • 15 Mar 2018 • Zhipeng Chen, Yiming Cui, Wentao Ma, Shijin Wang, Ting Liu, Guoping Hu
This paper describes the system which got the state-of-the-art results at SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge.
no code implementations • 20 Apr 2018 • Zhaowei Zhu, Ting Liu, Shengda Jin, Xiliang Luo
An effective task offloading strategy is needed to utilize the computational resources efficiently.
no code implementations • ACL 2018 • Yibo Sun, Duyu Tang, Nan Duan, Jianshu Ji, Guihong Cao, Xiaocheng Feng, Bing Qin, Ting Liu, Ming Zhou
We present a generative model to map natural language questions into SQL queries.
Ranked #4 on Code Generation on WikiSQL
1 code implementation • 14 May 2018 • Zhongyang Li, Xiao Ding, Ting Liu
Script event prediction requires a model to predict the subsequent event given an existing event context.
1 code implementation • ACL 2018 • Yijia Liu, Wanxiang Che, Huaipeng Zhao, Bing Qin, Ting Liu
Many natural language processing tasks can be modeled into structured prediction and solved as a search problem.
no code implementations • NAACL 2018 • Junwen Duan, Xiao Ding, Ting Liu
To address above issues, we propose a reinforcement learning based approach, which automatically induces target-specific sentence representations over tree structures.
1 code implementation • ACL 2018 • Qingyu Yin, Yu Zhang, Wei-Nan Zhang, Ting Liu, William Yang Wang
In this study, we show how to integrate local and global decision-making by exploiting deep reinforcement learning models.
no code implementations • COLING 2018 • Haoyang Wen, Yijia Liu, Wanxiang Che, Libo Qin, Ting Liu
Classic pipeline models for task-oriented dialogue systems require explicitly modeling the dialogue states and hand-crafted action spaces to query a domain-specific knowledge base.
Ranked #7 on Task-Oriented Dialogue Systems on KVRET
no code implementations • WS 2018 • Ruiji Fu, Zhengqi Pei, Jiefu Gong, Wei Song, Dechuan Teng, Wanxiang Che, Shijin Wang, Guoping Hu, Ting Liu
This paper describes our system at NLPTEA-2018 Task {\#}1: Chinese Grammatical Error Diagnosis.
1 code implementation • COLING 2018 • Yutai Hou, Yijia Liu, Wanxiang Che, Ting Liu
In this paper, we study the problem of data augmentation for language understanding in task-oriented dialogue system.
1 code implementation • CONLL 2018 • Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, Ting Liu
This paper describes our system (HIT-SCIR) submitted to the CoNLL 2018 shared task on Multilingual Parsing from Raw Text to Universal Dependencies.
Ranked #3 on Dependency Parsing on Universal Dependencies
1 code implementation • COLING 2018 • Qingyu Yin, Yu Zhang, Wei-Nan Zhang, Ting Liu, William Yang Wang
Recent neural network methods for zero pronoun resolution explore multiple models for generating representation vectors for zero pronouns and their candidate antecedents.
no code implementations • COLING 2018 • Zhongyang Li, Xiao Ding, Ting Liu
In this paper, we propose using adversarial training augmented Seq2Seq model to generate reasonable and diversified story endings given a story context.
1 code implementation • COLING 2018 • Junwen Duan, Yue Zhang, Xiao Ding, Ching-Yun Chang, Ting Liu
The model uses a target-sensitive representation of the news abstract to weigh sentences in the news content, so as to select and combine the most informative sentences for market modeling.
no code implementations • COLING 2018 • Wei-Nan Zhang, Yiming Cui, Yifa Wang, Qingfu Zhu, Lingzhi Li, Lianqiang Zhou, Ting Liu
Despite the success of existing works on single-turn conversation generation, when coherence is taken into consideration, human conversing is actually a context-sensitive process.
no code implementations • 11 Sep 2018 • Chengyao Qian, Ting Liu, Hao Jiang, Zhe Wang, Pengfei Wang, Mingxin Guan, Biao Sun
This report summarises our method and validation results for the ISIC Challenge 2018 - Skin Lesion Analysis Towards Melanoma Detection - Task 1: Lesion Segmentation.
no code implementations • ACL 2019 • Qingfu Zhu, Lei Cui, Wei-Nan Zhang, Furu Wei, Ting Liu
Dialogue systems are usually built on either generation-based or retrieval-based approaches, yet they do not benefit from the advantages of different models.
2 code implementations • 17 Sep 2018 • Tao Ruan, Ting Liu, Zilong Huang, Yunchao Wei, Shikui Wei, Yao Zhao, Thomas Huang
Human parsing has received considerable interest due to its wide application potential.
Ranked #2 on Person Re-Identification on Market-1501-C
no code implementations • EMNLP 2018 • Xinwei Geng, Xiaocheng Feng, Bing Qin, Ting Liu
Although end-to-end neural machine translation (NMT) has achieved remarkable progress in the recent years, the idea of adopting multi-pass decoding mechanism into conventional NMT is not well explored.
no code implementations • EMNLP 2018 • Lizhen Liu, Xiao Hu, Wei Song, Ruiji Fu, Ting Liu, Guoping Hu
Simile is a special type of metaphor, where comparators such as like and as are used to compare two objects.
no code implementations • EMNLP 2018 • Zexuan Zhong, Jiaqi Guo, Wei Yang, Jian Peng, Tao Xie, Jian-Guang Lou, Ting Liu, Dongmei Zhang
Recent research proposes syntax-based approaches to address the problem of generating programs from natural language specifications.
1 code implementation • EMNLP 2018 • Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, Ting Liu
In this paper, we propose a new rich resource enhanced AMR aligner which produces multiple alignments and a new transition system for AMR parsing along with its oracle parser.
Ranked #2 on AMR Parsing on LDC2014T12
1 code implementation • IJCNLP 2019 • Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu
Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention.
1 code implementation • 14 Dec 2018 • Sendong Zhao, Ting Liu, Sicheng Zhao, Fei Wang
State-of-the-art studies have demonstrated the superiority of joint modelling over pipeline implementation for medical named entity recognition and normalization due to the mutual benefits between the two processes.
no code implementations • 26 Dec 2018 • Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu
Neural machine translation (NMT) models generally adopt an encoder-decoder architecture for modeling the entire translation process.
no code implementations • 13 Feb 2019 • Tien-Ju Yang, Maxwell D. Collins, Yukun Zhu, Jyh-Jing Hwang, Ting Liu, Xiao Zhang, Vivienne Sze, George Papandreou, Liang-Chieh Chen
We present a single-shot, bottom-up approach for whole image parsing.
Ranked #32 on Panoptic Segmentation on Cityscapes val
1 code implementation • journal 2019 • Bing Shuai, Henghui Ding, Ting Liu, Gang Wang, Xudong Jiang
Furthermore, we introduce a “dense skip” architecture to retain a rich set of low-level information from the pre-trained CNN, which is essential to improve the low-level parsing performance.
no code implementations • 8 Mar 2019 • Tianwen Jiang, Ming Liu, Bing Qin, Ting Liu
This paper investigates an attention-based automatic paradigm called TransATT for attribute acquisition, by learning the representation of hierarchical classes and attributes in Chinese ontology.
no code implementations • 8 Mar 2019 • Tianwen Jiang, Sendong Zhao, Jing Liu, Jin-Ge Yao, Ming Liu, Bing Qin, Ting Liu, Chin-Yew Lin
Time-DS is composed of a time series instance-popularity and two strategies.
1 code implementation • 17 May 2019 • Zhongyang Li, Xiao Ding, Ting Liu
In this study, we investigate a transferable BERT (TransBERT) training framework, which can transfer not only general language knowledge from large-scale unlabeled data but also specific kinds of knowledge from various semantically related supervised tasks, for a target task.
5 code implementations • ACL 2019 • Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, Dongmei Zhang
We present a neural approach called IRNet for complex and cross-domain Text-to-SQL.
1 code implementation • 29 May 2019 • Haoyu Song, Wei-Nan Zhang, Yiming Cui, Dong Wang, Ting Liu
Given conversational context with persona information, how a chatbot can exploit the information to generate diverse and sustainable conversations is still a non-trivial task.
no code implementations • ACL 2019 • Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu
We also present a way to construct training data for our question generation models by leveraging the existing reading comprehension dataset.
2 code implementations • 19 Jun 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang
To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc.
no code implementations • 20 Jun 2019 • Yutai Hou, Zhihan Zhou, Yijia Liu, Ning Wang, Wanxiang Che, Han Liu, Ting Liu
It calculates emission score with similarity based methods and obtains transition score with a specially designed transfer mechanism.
no code implementations • 26 Jun 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Conditions are essential in the statements of biological literature.
no code implementations • 18 Jul 2019 • Xiao Ding, Zhongyang Li, Ting Liu, Kuo Liao
The evolution and development of events have their own basic principles, which make events happen sequentially.
no code implementations • 15 Aug 2019 • Shaolei Wang, Wanxiang Che, Qi Liu, Pengda Qin, Ting Liu, William Yang Wang
The pre-trained network is then fine-tuned using human-annotated disfluency detection training data.
1 code implementation • IJCNLP 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
In this paper, we propose Cross-Lingual Machine Reading Comprehension (CLMRC) task for the languages other than English.
2 code implementations • IJCNLP 2019 • Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, Ting Liu
In our framework, we adopt a joint model with Stack-Propagation, which can directly use the intent information as input for slot filling, thus capturing the intent semantic knowledge.
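The core idea of feeding intent information into the slot filler can be sketched as concatenating the (softmaxed) intent distribution onto every token representation; this is a minimal illustration under assumed shapes, not the paper's model, and `stack_propagation_features` is a hypothetical name:

```python
import numpy as np

def stack_propagation_features(token_feats, intent_logits):
    """Sketch of the Stack-Propagation idea: the softmaxed intent
    distribution is appended to every token representation so the
    slot-filling classifier can condition directly on the predicted
    intent. token_feats: (n_tokens, d); intent_logits: (n_intents,)."""
    p = np.exp(intent_logits - intent_logits.max())
    p /= p.sum()
    n_tokens = token_feats.shape[0]
    return np.concatenate([token_feats, np.tile(p, (n_tokens, 1))], axis=1)

feats = stack_propagation_features(np.zeros((4, 8)), np.array([2.0, 0.0, -1.0]))
print(feats.shape)  # (4, 11)
```

Because the intent signal is part of the slot filler's input rather than a separate output head, errors in slot prediction can back-propagate through the intent representation, which is the claimed benefit of the joint setup.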
Ranked #2 on Intent Detection on SNIPS
1 code implementation • IJCNLP 2019 • Heng Gong, Xiaocheng Feng, Bing Qin, Ting Liu
To address the aforementioned problems, we not only model each table cell by considering other records in the same row, but also enrich the table's representation by modeling each table cell in the context of other cells in the same column or with historical (time-dimension) data, respectively.
1 code implementation • IJCNLP 2019 • Xiao Ding, Kuo Liao, Ting Liu, Zhongyang Li, Junwen Duan
Prior work has proposed effective methods to learn event representations that can capture syntactic and semantic information over text corpus, demonstrating their effectiveness for downstream tasks such as script event prediction.
1 code implementation • 10 Sep 2019 • Yutai Hou, Meng Fang, Wanxiang Che, Ting Liu
The framework builds a user simulator by first generating diverse dialogue data from templates and then building a new State2Seq user simulator on the data.
1 code implementation • IJCNLP 2019 • Libo Qin, Yijia Liu, Wanxiang Che, Haoyang Wen, Yangming Li, Ting Liu
Querying the knowledge base (KB) has long been a challenge in the end-to-end task-oriented dialogue system.
Ranked #6 on Task-Oriented Dialogue Systems on KVRET
1 code implementation • IJCNLP 2019 • Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, Ting Liu
In this approach, a linear transformation is learned from contextual word alignments to align the contextualized embeddings independently trained in different languages.
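The alignment step can be sketched as an ordinary least-squares fit over row-aligned embedding pairs (a simplification: the paper learns the map from contextual word alignments, and `learn_alignment` is an assumed name):

```python
import numpy as np

def learn_alignment(src, tgt):
    """Least-squares linear map W with src @ W ~= tgt, where row i of
    src and tgt is a pair of aligned word embeddings in two languages."""
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W

# Synthetic check: recover a known linear map from aligned pairs.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 4))
true_W = rng.normal(size=(4, 4))
W = learn_alignment(src, src @ true_W)
print(np.allclose(W, true_W))  # True
```

Once W is learned, embeddings from one language can be projected into the other language's space, allowing a parser trained on the target space to be applied cross-lingually.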
no code implementations • IJCNLP 2019 • Li Du, Xiao Ding, Ting Liu, Zhongyang Li
Understanding event and event-centered commonsense reasoning are crucial for natural language processing (NLP).
no code implementations • CONLL 2019 • Wentao Ma, Yiming Cui, Nan Shao, Su He, Wei-Nan Zhang, Ting Liu, Shijin Wang, Guoping Hu
The heart of TripleNet is a novel attention mechanism named triple attention to model the relationships within the triple at four levels.
2 code implementations • 10 Oct 2019 • Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen
The semantic segmentation branch is the same as the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation branch is class-agnostic, involving a simple instance center regression.
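A minimal sketch of class-agnostic grouping by center regression (illustrative only; `group_pixels`, the (dy, dx) offset convention, and the toy inputs are assumptions): each pixel is shifted by its regressed offset and assigned to the nearest predicted instance center.

```python
import numpy as np

def group_pixels(centers, offsets):
    """Assign every pixel to the nearest instance center after shifting
    it by its predicted offset. centers: (K, 2) (y, x) coordinates;
    offsets: (H, W, 2) per-pixel (dy, dx). Returns (H, W) instance ids."""
    H, W, _ = offsets.shape
    ys, xs = np.mgrid[0:H, 0:W]
    shifted = np.stack([ys + offsets[..., 0], xs + offsets[..., 1]], axis=-1)
    dists = np.linalg.norm(shifted[:, :, None, :] - centers[None, None], axis=-1)
    return dists.argmin(axis=-1)

centers = np.array([[0.0, 0.0], [3.0, 3.0]])
ids = group_pixels(centers, np.zeros((4, 4, 2)))  # zero offsets for the toy case
print(ids[0, 0], ids[3, 3])  # 0 1
```

With zero offsets the assignment degenerates to nearest-center in pixel space; in the real model the learned offsets pull each pixel toward the center of its own instance before the assignment.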
no code implementations • CONLL 2019 • Wanxiang Che, Longxu Dou, Yang Xu, Yuxuan Wang, Yijia Liu, Ting Liu
This paper describes our system (HIT-SCIR) for CoNLL 2019 shared task: Cross-Framework Meaning Representation Parsing.
Ranked #1 on UCCA Parsing on CoNLL 2019
no code implementations • IJCNLP 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh Chawla, Meng Jiang
In this work, we propose a new sequence labeling framework (as well as a new tag schema) to jointly extract the fact and condition tuples from statement sentences.
no code implementations • IJCNLP 2019 • Ziyue Wang, Baoxin Wang, Xingyi Duan, Dayong Wu, Shijin Wang, Guoping Hu, Ting Liu
To our knowledge, IFlyLegal is the first Chinese legal system that employs up-to-date NLP techniques and caters to the needs of different user groups, such as lawyers, judges, procurators, and clients.
no code implementations • 8 Nov 2019 • Haichao Zhu, Li Dong, Furu Wei, Bing Qin, Ting Liu
The limited size of existing query-focused summarization datasets renders training data-driven summarization models challenging.
no code implementations • 8 Nov 2019 • Jiaqi Li, Ming Liu, Bing Qin, Zihao Zheng, Ting Liu
In this paper, we propose the scheme for annotating large-scale multi-party chat dialogues for discourse parsing and machine comprehension.
no code implementations • 9 Nov 2019 • Ziqing Yang, Yiming Cui, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
With virtual adversarial training (VAT), we explore the possibility of improving the RC models with semi-supervised learning and prove that examples from a different task are also beneficial.
1 code implementation • 14 Nov 2019 • Haoyu Song, Wei-Nan Zhang, Jingwen Hu, Ting Liu
Consistency is one of the major challenges faced by dialogue agents.
no code implementations • 14 Nov 2019 • Yiming Cui, Wei-Nan Zhang, Wanxiang Che, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu
Recurrent Neural Networks (RNN) are known as powerful models for handling sequential data, and are widely utilized in various natural language processing tasks.
9 code implementations • CVPR 2020 • Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen
In this work, we introduce Panoptic-DeepLab, a simple, strong, and fast system for panoptic segmentation, aiming to establish a solid baseline for bottom-up methods that can achieve performance comparable to two-stage methods while yielding fast inference speed.
Ranked #6 on Panoptic Segmentation on Cityscapes test (using extra training data)
no code implementations • 27 Nov 2019 • Jennifer J. Sun, Ting Liu, Gautam Prasad
Towards a better understanding of viewer impact, we present our methods for the MediaEval 2018 Emotional Impact of Movies Task to predict the expected valence and arousal continuously in movies.
2 code implementations • ECCV 2020 • Jennifer J. Sun, Jiaping Zhao, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Ting Liu
Depictions of similar human body configurations can vary with changing viewpoints.
Ranked #1 on Pose Retrieval on MPI-INF-3DHP
no code implementations • 19 Dec 2019 • Yiming Cui, Wanxiang Che, Wei-Nan Zhang, Ting Liu, Shijin Wang, Guoping Hu
Story Ending Prediction is a task that needs to select an appropriate ending for the given story, which requires the machine to understand the story and sometimes needs commonsense knowledge.
no code implementations • 19 Dec 2019 • Xingyi Duan, Baoxin Wang, Ziyue Wang, Wentao Ma, Yiming Cui, Dayong Wu, Shijin Wang, Ting Liu, Tianxiang Huo, Zhen Hu, Heng Wang, Zhiyuan Liu
We present a Chinese judicial reading comprehension (CJRC) dataset which contains approximately 10K documents and almost 50K questions with answers.
1 code implementation • 15 Jan 2020 • Jennifer J. Sun, Ting Liu, Alan S. Cowen, Florian Schroff, Hartwig Adam, Gautam Prasad
The ability to predict evoked affect from a video, before viewers watch the video, can help in content creation and video recommendation.
8 code implementations • Findings of the Association for Computational Linguistics 2020 • Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks.
Ranked #1 on Code Documentation Generation on CodeSearchNet - Go
1 code implementation • 24 Feb 2020 • Xiaocheng Feng, Yawei Sun, Bing Qin, Heng Gong, Yibo Sun, Wei Bi, Xiaojiang Liu, Ting Liu
In this paper, we focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer and aims to preserve text styles while altering the content.
1 code implementation • ACL 2020 • Ziqing Yang, Yiming Cui, Zhipeng Chen, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
In this paper, we introduce TextBrewer, an open-source knowledge distillation toolkit designed for natural language processing.
1 code implementation • COLING 2020 • Yiming Cui, Ting Liu, Ziqing Yang, Zhipeng Chen, Wentao Ma, Wanxiang Che, Shijin Wang, Guoping Hu
To add diversity in this area, in this paper, we propose a new task called Sentence Cloze-style Machine Reading Comprehension (SC-MRC).
no code implementations • EMNLP 2020 • Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
We construct a strong baseline model to establish that, with the proper use of pre-trained models, graph structure may not be necessary for multi-hop question answering.
1 code implementation • COLING 2020 • Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, Bing Qin
Research into the area of multiparty dialog has grown considerably over recent years.
Ranked #7 on Discourse Parsing on Molweni
no code implementations • ACL 2020 • Haoyu Song, Yan Wang, Wei-Nan Zhang, Xiaojiang Liu, Ting Liu
Maintaining a consistent personality in conversations is quite natural for human beings, but is still a non-trivial task for machines.
no code implementations • 17 Apr 2020 • Longxuan Ma, Wei-Nan Zhang, Mingda Li, Ting Liu
We believe that extracting information from unstructured documents is the future trend of the DS, because a great amount of human knowledge lies in these documents.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Libo Qin, Xiao Xu, Wanxiang Che, Ting Liu
Such an interaction layer is applied to each token adaptively, which has the advantage of automatically extracting the relevant intent information, yielding a fine-grained intent information integration for the token-level slot prediction.
1 code implementation • ACL 2020 • Libo Qin, Xiao Xu, Wanxiang Che, Yue Zhang, Ting Liu
However, there has been relatively little research on how to effectively use data from all domains to improve the performance of each domain and also unseen domains.
Ranked #1 on Task-Oriented Dialogue Systems on Kvret
1 code implementation • EMNLP 2020 • Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, Xiangzhan Yu
Deep pretrained language models have achieved great success in the way of pretraining first and then fine-tuning.
1 code implementation • ACL 2020 • Wentao Ma, Yiming Cui, Ting Liu, Dong Wang, Shijin Wang, Guoping Hu
Human conversations contain many types of information, e.g., knowledge, common sense, and language habits.
no code implementations • 29 Apr 2020 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang
Open-domain dialogue generation suffers from the data insufficiency problem due to the vast size of potential responses.
1 code implementation • Findings (ACL) 2021 • Chenglei Si, Ziqing Yang, Yiming Cui, Wentao Ma, Ting Liu, Shijin Wang
To fill this important gap, we construct AdvRACE (Adversarial RACE), a new model-agnostic benchmark for evaluating the robustness of MRC models under four different types of adversarial attacks, including our novel distractor extraction and generation attacks.
6 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of the pre-trained language models.
Ranked #13 on Stock Market Prediction on Astock
no code implementations • 30 Apr 2020 • Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che, Yangming Li, Ting Liu
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
1 code implementation • ACL 2020 • Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu
Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words.
2 code implementations • ACL 2020 • Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
We propose a new task of conversational recommendation over multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account the user's interests and feedback.
1 code implementation • ACL 2020 • Bo Zheng, Haoyang Wen, Yaobo Liang, Nan Duan, Wanxiang Che, Daxin Jiang, Ming Zhou, Ting Liu
Natural Questions is a new challenging machine reading comprehension benchmark with answers at two granularities: a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).
2 code implementations • ACL 2020 • Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, Ting Liu
In this paper, we explore slot tagging with only a few labeled support sentences (a.k.a. few-shot slot tagging).
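A standard baseline for this few-shot setting labels each query token by its nearest slot-label prototype computed from the support sentences. The sketch below is a generic nearest-prototype baseline, not the paper's actual model; the embeddings, labels, and dot-product similarity are illustrative assumptions.

```python
from collections import defaultdict

def build_prototypes(support):
    """Average the embeddings of support tokens per slot label."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for emb, label in support:
        if sums[label] is None:
            sums[label] = list(emb)
        else:
            sums[label] = [s + e for s, e in zip(sums[label], emb)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in vec] for lab, vec in sums.items()}

def tag(query_embs, prototypes):
    """Assign each query token the label of its most similar prototype."""
    def score(a, b):
        return sum(x * y for x, y in zip(a, b))  # dot-product similarity
    return [max(prototypes, key=lambda lab: score(emb, prototypes[lab]))
            for emb in query_embs]

# Toy support set: two tokens labeled B-city, one labeled O.
support = [([1.0, 0.0], "B-city"), ([0.9, 0.1], "B-city"), ([0.0, 1.0], "O")]
protos = build_prototypes(support)
print(tag([[1.0, 0.2], [0.1, 0.9]], protos))  # → ['B-city', 'O']
```

In practice the token embeddings would come from a pretrained encoder, and stronger few-shot taggers also model label-transition dependencies rather than tagging each token independently.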
no code implementations • 12 Jun 2020 • Suncheng Xiang, Yuzhuo Fu, Guanjie You, Ting Liu
To address this problem, we first develop a large-scale synthetic data engine whose salient characteristic is controllability.
no code implementations • 17 Jun 2020 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Noun phrases and relational phrases in Open Knowledge Bases are often not canonical, leading to redundant and ambiguous facts.
1 code implementation • 26 Jun 2020 • Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, Ting Liu
It is unknown whether there are any connections and common characteristics between the defenses against these two attacks.
no code implementations • ACL 2020 • Yangming Li, Kaisheng Yao, Libo Qin, Wanxiang Che, Xiaolong Li, Ting Liu
Data-driven approaches using neural networks have achieved promising performance in natural language generation (NLG).
no code implementations • ACL 2020 • Jun Xu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
To address the challenge of policy learning in open-domain multi-turn conversation, we propose to represent prior information about dialog transitions as a graph and learn a graph grounded dialog policy, aimed at fostering a more coherent and controllable dialog.
no code implementations • 13 Aug 2020 • Ming Fan, Wenying Wei, Xiaofei Xie, Yang Liu, Xiaohong Guan, Ting Liu
For this reason, a variety of explanation approaches are proposed to interpret predictions by providing important features.
Cryptography and Security • Software Engineering
no code implementations • 16 Aug 2020 • Libo Qin, Wanxiang Che, Yangming Li, Minheng Ni, Ting Liu
In dialog systems, dialog act recognition and sentiment classification are two correlative tasks for capturing speakers' intentions, where dialog act and sentiment indicate the explicit and the implicit intentions, respectively.
3 code implementations • 17 Sep 2020 • Yutai Hou, Jiafeng Mao, Yongkui Lai, Cheng Chen, Wanxiang Che, Zhigang Chen, Ting Liu
In this paper, we present FewJoint, a novel Few-Shot Learning benchmark for NLP.
1 code implementation • EMNLP 2020 • Haoyu Song, Yan Wang, Wei-Nan Zhang, Zhengyu Zhao, Ting Liu, Xiaojiang Liu
Maintaining a consistent attribute profile is crucial for dialogue agents to naturally converse with humans.
1 code implementation • EMNLP (ACL) 2021 • Wanxiang Che, Yunlong Feng, Libo Qin, Ting Liu
We introduce N-LTP, an open-source neural language technology platform supporting six fundamental Chinese NLP tasks: lexical analysis (Chinese word segmentation, part-of-speech tagging, and named entity recognition), syntactic parsing (dependency parsing), and semantic parsing (semantic dependency parsing and semantic role labeling).
no code implementations • 1 Oct 2020 • Shaolei Wang, Baoxin Wang, Jiefu Gong, Zhongyuan Wang, Xiao Hu, Xingyi Duan, Zizhuo Shen, Gang Yue, Ruiji Fu, Dayong Wu, Wanxiang Che, Shijin Wang, Guoping Hu, Ting Liu
Grammatical error diagnosis is an important task in natural language processing.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Longxuan Ma, Wei-Nan Zhang, Runxin Sun, Ting Liu
Unstructured documents, serving as external knowledge for dialogues, help generate more informative responses.
1 code implementation • 8 Oct 2020 • Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, Ting Liu
Instead of adopting the self-attention mechanism of the vanilla Transformer, we propose a co-interactive module that considers the cross-impact by building a bidirectional connection between the two related tasks.
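A bidirectional connection of this kind can be sketched as two cross-attention passes, one in each direction, with a residual add so each task keeps its own signal. This is a hedged, illustrative sketch of the general co-interaction idea, not the paper's exact module; the function names, dimensions, and plain dot-product attention are assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attend(queries, keys_values):
    """Each query vector attends over keys_values with dot-product attention."""
    out = []
    for q in queries:
        weights = softmax([sum(a * b for a, b in zip(q, kv)) for kv in keys_values])
        out.append([sum(w * kv[d] for w, kv in zip(weights, keys_values))
                    for d in range(len(keys_values[0]))])
    return out

def co_interactive(act_h, sent_h):
    """Bidirectional connection: dialog-act states attend to sentiment states
    and vice versa; a residual add preserves each task's own representation."""
    act_new = [[a + b for a, b in zip(h, ctx)]
               for h, ctx in zip(act_h, cross_attend(act_h, sent_h))]
    sent_new = [[a + b for a, b in zip(h, ctx)]
                for h, ctx in zip(sent_h, cross_attend(sent_h, act_h))]
    return act_new, sent_new

act_new, sent_new = co_interactive([[1.0, 0.0]], [[0.0, 1.0], [1.0, 0.0]])
```

A real co-interactive layer would use learned query/key/value projections and layer normalization; the sketch only shows the bidirectional information flow.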
1 code implementation • 8 Oct 2020 • Dechuan Teng, Libo Qin, Wanxiang Che, Sendong Zhao, Ting Liu
In this paper, we improve Chinese spoken language understanding (SLU) by injecting word information.
no code implementations • 11 Oct 2020 • Yutai Hou, Yongkui Lai, Yushan Wu, Wanxiang Che, Ting Liu
In this paper, we study the few-shot multi-label classification for user intent detection.
1 code implementation • NeurIPS 2020 • Long Zhao, Ting Liu, Xi Peng, Dimitris Metaxas
In this paper, we propose a novel and effective regularization term for adversarial data augmentation.
no code implementations • 15 Oct 2020 • Suncheng Xiang, Yuzhuo Fu, Guanjie You, Ting Liu
Person re-identification (re-ID) plays an important role in applications such as public security and video surveillance.
no code implementations • 17 Oct 2020 • Yunchao Wei, Shuai Zheng, Ming-Ming Cheng, Hang Zhao, LiWei Wang, Errui Ding, Yi Yang, Antonio Torralba, Ting Liu, Guolei Sun, Wenguan Wang, Luc van Gool, Wonho Bae, Junhyug Noh, Jinhwan Seo, Gunhee Kim, Hao Zhao, Ming Lu, Anbang Yao, Yiwen Guo, Yurong Chen, Li Zhang, Chuangchuang Tan, Tao Ruan, Guanghua Gu, Shikui Wei, Yao Zhao, Mariia Dobko, Ostap Viniavskyi, Oles Dobosevych, Zhendong Wang, Zhenyuan Chen, Chen Gong, Huanqing Yan, Jun He
The purpose of the Learning from Imperfect Data (LID) workshop is to inspire and facilitate the research in developing novel approaches that would harness the imperfect data and improve the data-efficiency during training.
1 code implementation • CCL 2021 • Xiachong Feng, Xiaocheng Feng, Bing Qin, Ting Liu
In detail, we consider utterance and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) for modeling both information.
2 code implementations • 23 Oct 2020 • Ting Liu, Jennifer J. Sun, Long Zhao, Jiaping Zhao, Liangzhe Yuan, Yuxiao Wang, Liang-Chieh Chen, Florian Schroff, Hartwig Adam
Recognition of human poses and actions is crucial for autonomous systems to interact smoothly with people.
1 code implementation • EMNLP 2020 • Shaolei Wang, Zhongyuan Wang, Wanxiang Che, Ting Liu
Most existing approaches to disfluency detection rely heavily on human-annotated corpora, which are expensive to obtain in practice.