no code implementations • RANLP 2021 • Minghuan Tan, Jing Jiang
Understanding idioms is important in NLP.
no code implementations • EMNLP 2020 • Jianfei Yu, Jing Jiang, Ling Min Serena Khoo, Hai Leong Chieu, Rui Xia
The prevalent use of social media enables rapid spread of rumors on a massive scale, which leads to the emerging need of automatic rumor verification (RV).
no code implementations • ACL 2022 • Sicheng Yu, Qianru Sun, Hao Zhang, Jing Jiang
Translate-train is a general training approach to multilingual tasks.
1 code implementation • RANLP 2021 • Minghuan Tan, Jing Jiang
We find that our method substantially outperforms existing methods on the evaluation dataset we have constructed.
1 code implementation • 29 May 2023 • Yijun Yang, Tianyi Zhou, Jing Jiang, Guodong Long, Yuhui Shi
We address it by "Continual Task Allocation via Sparse Prompting (CoTASP)", which learns over-complete dictionaries to produce sparse masks as prompts extracting a sub-network for each task from a meta-policy network.
no code implementations • 27 May 2023 • Rui Cao, Jing Jiang
We propose a modularized zero-shot network that explicitly decomposes questions into sub-reasoning steps and is highly interpretable.
no code implementations • 23 May 2023 • Shengchao Chen, Guodong Long, Tao Shen, Tianyi Zhou, Jing Jiang
Federated weather forecasting is a promising collaborative learning framework for analyzing meteorological data across participants from different countries and regions, thus embodying a global-scale real-time weather data predictive analytics platform to tackle climate change.
no code implementations • 9 Apr 2023 • Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
In this paper, we study which modules in neural networks are more prone to forgetting by investigating their training dynamics during CL.
no code implementations • 14 Mar 2023 • Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang
In this paper, we consider an offline-to-online setting where the agent is first learned from the offline dataset and then trained online, and propose a framework called Adaptive Policy Learning for effectively taking advantage of offline and online data.
no code implementations • 8 Feb 2023 • Rui Cao, Roy Ka-Wei Lee, Wen-Haw Chong, Jing Jiang
Specifically, we construct simple prompts and provide a few in-context examples to exploit the implicit knowledge in the pre-trained RoBERTa language model for hateful meme classification.
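A hedged sketch of this style of prompting with a masked language model follows; the template, demonstrations, and label words are invented for illustration and are not the paper's exact prompt.

```python
# Sketch of few-shot prompting with a masked LM via the Hugging Face
# fill-mask pipeline. Multi-token label words fall back to their first
# sub-token with a warning; all prompt text below is illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

demos = (
    "Text: a cute dog wearing a hat. It was good. "
    "Text: an attack on an ethnic group. It was hateful. "
)
query = "Text: meme caption plus image description here. It was <mask>."
scores = fill(demos + query, targets=[" good", " hateful"])
for s in scores:
    print(s["token_str"], round(s["score"], 4))
```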
no code implementations • 27 Jan 2023 • Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
To address these challenges, we create a small model for a new task from the pruned models of similar tasks.
1 code implementation • 22 Jan 2023 • Shengchao Chen, Guodong Long, Tao Shen, Jing Jiang
To relieve the data exposure concern across regions, a novel federated learning approach has been proposed to collaboratively learn a brand-new spatio-temporal Transformer-based foundation model across participants with heterogeneous meteorological data.
no code implementations • 12 Dec 2022 • Yachao Li, Junhui Li, Jing Jiang, Shimin Tao, Hao Yang, Min Zhang
To alleviate this problem, we propose a position-aware Transformer (P-Transformer) to enhance both the absolute and relative position information in both self-attention and cross-attention.
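The sketch below illustrates one common way to combine content scores with a learned relative-position bias inside attention; it is an assumption-laden reading of "position-aware", not P-Transformer's exact formulation.

```python
# Content-content attention scores plus a learned per-head relative-position
# bias; shapes and the clipping window are illustrative assumptions.
import torch

def pos_aware_attention(q, k, v, rel_bias):
    # q, k, v: (batch, heads, seq, d); rel_bias: (heads, seq, seq)
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # content-content term
    scores = scores + rel_bias                   # content-position term
    return torch.softmax(scores, dim=-1) @ v

b, h, n, d = 2, 4, 10, 16
q = k = v = torch.randn(b, h, n, d)
# Relative distances clipped to a small window, indexed into an embedding table
idx = torch.clamp(torch.arange(n)[:, None] - torch.arange(n)[None, :], -4, 4) + 4
rel_emb = torch.nn.Embedding(9, h)               # one bias per head per offset
rel_bias = rel_emb(idx).permute(2, 0, 1)         # (heads, seq, seq)
out = pos_aware_attention(q, k, v, rel_bias)
print(out.shape)  # torch.Size([2, 4, 10, 16])
```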
1 code implementation • 23 Nov 2022 • Yue Tan, Yixin Liu, Guodong Long, Jing Jiang, Qinghua Lu, Chengqi Zhang
Inspired by this, we propose FedStar, an FGL framework that extracts and shares the common underlying structure information for inter-graph federated learning tasks.
no code implementations • 11 Nov 2022 • Yang Li, Canran Xu, Tao Shen, Jing Jiang, Guodong Long
A shared task description cannot elicit the unique task-related information in each training sample, especially for tasks with a finite label space.
no code implementations • COLING 2022 • Cunxiao Du, Zhaopeng Tu, Longyue Wang, Jing Jiang
Recently, a new training objective, the order-agnostic cross entropy (OaXE) loss, has proven effective at ameliorating the effect of multimodality for non-autoregressive translation (NAT) by removing the penalty on word-order errors in the standard cross-entropy loss.
no code implementations • 21 Sep 2022 • Yue Tan, Guodong Long, Jie Ma, Lu Liu, Tianyi Zhou, Jing Jiang
To prevent these issues from hindering the deployment of FL systems, we propose a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models rather than training a large-scale model from scratch.
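A minimal sketch of such a lightweight fusion head is shown below; the projection-plus-attention design and all names are assumptions rather than the paper's architecture.

```python
# Fusion head over frozen backbone features: one small projector per
# pre-trained model, then an attention-weighted sum. Clients would train only
# this head, never the backbones. All dimensions are illustrative.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, feat_dims, d=128, n_classes=10):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(f, d) for f in feat_dims])
        self.attn = nn.Linear(d, 1)        # scores each model's projected view
        self.cls = nn.Linear(d, n_classes)

    def forward(self, feats):              # feats: list of (batch, f_i) tensors
        views = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)
        w = torch.softmax(self.attn(views), dim=1)   # (batch, n_models, 1)
        fused = (w * views).sum(dim=1)               # attention-weighted fusion
        return self.cls(fused)

head = FusionHead(feat_dims=[512, 768, 1024])
logits = head([torch.randn(4, 512), torch.randn(4, 768), torch.randn(4, 1024)])
print(logits.shape)  # torch.Size([4, 10])
```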
1 code implementation • 3 Sep 2022 • Yunshi Lan, Lei Wang, Jing Jiang, Ee-Peng Lim
To improve compositional generalization in MWP solving, we propose an iterative data augmentation method that injects diverse compositional variations into the training data and can be combined with existing MWP methods.
no code implementations • 17 Aug 2022 • Jing Jiang, Weihong Deng
By combining the identity and pose features, the decoder should generate a neutral face of the input individual.
1 code implementation • 15 Aug 2022 • Pengfei Wei, Lingdong Kong, Xinghua Qu, Xiang Yin, Zhiqiang Xu, Jing Jiang, Zejun Ma
Specifically, we consider the generation of cross-domain videos from two sets of latent factors, one encoding the static domain-related information and another encoding the temporal and semantic-related information.
no code implementations • 28 May 2022 • Jing Jiang, Weihong Deng
On the one hand, PT introduces a semi-supervised learning method to relieve the data shortage in FER.
no code implementations • 20 May 2022 • Zhuowei Wang, Tianyi Zhou, Guodong Long, Bo Han, Jing Jiang
Federated learning (FL) aims at training a global model on the server side while the training data are collected and located at the local devices.
1 code implementation • ACL 2022 • Xiaosen Zheng, Jing Jiang
We empirically show that our memorization attribution method is faithful, and share our interesting finding that the top-memorized parts of a training instance tend to be features negatively correlated with the class label.
1 code implementation • 2 Mar 2022 • Fengwen Chen, Guodong Long, Zonghan Wu, Tianyi Zhou, Jing Jiang
We propose a novel structured federated learning (SFL) framework to learn both the global and personalized models simultaneously using client-wise relation graphs and clients' private data.
1 code implementation • ACL 2022 • Minghuan Tan, Yong Dai, Duyu Tang, Zhangyin Feng, Guoping Huang, Jing Jiang, Jiwei Li, Shuming Shi
We find that a frozen GPT achieves state-of-the-art performance on perfect pinyin.
no code implementations • 21 Feb 2022 • Weiqi Hua, Ying Chen, Meysam Qadrdan, Jing Jiang, Hongjian Sun, Jianzhong Wu
The blockchain and artificial intelligence (AI) are innovative technologies to fulfil these two factors, by which the blockchain provides decentralised trading platforms for energy markets and the AI supports the optimal operational control of power systems.
1 code implementation • 13 Feb 2022 • Jie Ma, Guodong Long, Tianyi Zhou, Jing Jiang, Chengqi Zhang
Knowledge sharing and model personalization are essential components to tackle the non-IID challenge in federated learning (FL).
1 code implementation • NeurIPS 2021 • Shuang Ao, Tianyi Zhou, Guodong Long, Qinghua Lu, Liming Zhu, Jing Jiang
Next, a bottom-up traversal of the tree trains the RL agent from easier sub-tasks with denser rewards on bottom layers to harder ones on top layers, and collects its cost on each sub-task to train the planner in the next episode.
1 code implementation • 24 Nov 2021 • Zhining Liu, Pengfei Wei, Zhepei Wei, Boyang Yu, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang
Class-imbalance is a common problem in machine learning practice.
1 code implementation • 24 Oct 2021 • Zhihong Deng, Zuyue Fu, Lingxiao Wang, Zhuoran Yang, Chenjia Bai, Zhaoran Wang, Jing Jiang
Offline reinforcement learning (RL) aims to learn the optimal policy from a pre-collected dataset without online interactions.
no code implementations • ICLR 2022 • Yijun Yang, Jing Jiang, Tianyi Zhou, Jie Ma, Yuhui Shi
Model-based offline RL instead trains an environment model using a dataset of pre-collected experiences so online RL methods can learn in an offline manner by solely interacting with the model.
no code implementations • 29 Sep 2021 • Han Zheng, Jing Jiang, Pengfei Wei, Guodong Long, Xuan Song, Chengqi Zhang
URPL adds an uncertainty regularization term to the policy learning objective to enforce learning a more stable policy under the offline setting.
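A plausible reading of such a regularizer, sketched with a Q-ensemble's standard deviation as the uncertainty measure (an assumption; URPL's exact term may differ):

```python
# Uncertainty-regularized policy objective: maximize the ensemble-mean value
# while penalizing ensemble disagreement. All names are illustrative.
import torch

def uncertainty_regularized_loss(q_ensemble, states, policy, beta=1.0):
    actions = policy(states)
    qs = torch.stack([q(states, actions) for q in q_ensemble])  # (E, batch)
    return (-qs.mean(dim=0) + beta * qs.std(dim=0)).mean()

states = torch.randn(8, 3)
policy = lambda s: torch.tanh(s @ torch.randn(3, 2))
q_ensemble = [
    (lambda s, a, W=torch.randn(5): torch.cat([s, a], dim=-1) @ W)
    for _ in range(4)  # each member keeps its own weights W
]
print(uncertainty_regularized_loss(q_ensemble, states, policy))
```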
no code implementations • 29 Sep 2021 • Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Liming Zhu, Chengqi Zhang
Can we find a better initialization for a new task, e.g., a much smaller network closer to the final pruned model, by exploiting its similar tasks?
no code implementations • 29 Sep 2021 • Minglei You, Qian Wang, Hongjian Sun, Ivan Castro, Jing Jiang
By constructing digital twins (DT) of an integrated energy system (IES), one can benefit from the DT's predictive capabilities to improve coordination among various energy converters, hence enhancing energy efficiency, cost savings and carbon emission reduction.
no code implementations • 29 Sep 2021 • Shuang Ao, Tianyi Zhou, Jing Jiang, Guodong Long, Xuan Song, Chengqi Zhang
They are complementary in acquiring more informative feedback for RL: the planning policy provides dense rewards for finishing easier sub-tasks, while the environment policy modifies these sub-tasks to be adequately challenging and diverse, so the RL agent can quickly adapt to different tasks/environments.
no code implementations • 29 Sep 2021 • Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang
Specifically, we explicitly consider the difference between the online and offline data and apply an adaptive update scheme accordingly, i.e., a pessimistic update strategy for the offline dataset and a greedy or non-pessimistic update scheme for the online dataset.
1 code implementation • Findings (EMNLP) 2021 • Qiyuan Zhang, Lei Wang, Sicheng Yu, Shuohang Wang, Yang Wang, Jing Jiang, Ee-Peng Lim
While diverse question answering (QA) datasets have been proposed and contributed significantly to the development of deep learning models for QA tasks, the existing datasets fall short in two aspects.
no code implementations • Findings (NAACL) 2022 • Yang Li, Guodong Long, Tao Shen, Jing Jiang
It consists of (1) a pairwise type-enriched sentence encoding module injecting both context-free and context-related backgrounds to alleviate sentence-level wrong labeling, and (2) a hierarchical type-sentence alignment module enriching a sentence with the triple fact's basic attributes to support long-tail relations.
1 code implementation • 7 Sep 2021 • Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang
Sequential diagnosis prediction on the Electronic Health Record (EHR) has been proven crucial for predictive analytics in the medical domain.
no code implementations • 24 Aug 2021 • Guodong Long, Tao Shen, Yue Tan, Leah Gerrard, Allison Clarke, Jing Jiang
Implementing an open innovation framework in the healthcare industry, namely open health, aims to enhance the innovation and creative capability of health-related organisations by building a next-generation collaborative framework with partner organisations and the research community.
no code implementations • 24 Aug 2021 • Guodong Long, Yue Tan, Jing Jiang, Chengqi Zhang
In the near future, it is foreseeable to have decentralized data ownership in the finance sector using federated learning.
1 code implementation • 19 Aug 2021 • Guodong Long, Ming Xie, Tao Shen, Tianyi Zhou, Xianzhi Wang, Jing Jiang, Chengqi Zhang
By comparison, a mixture of multiple global models could capture the heterogeneity across various clients by assigning each client to a different global model (i.e., center) in FL.
1 code implementation • 15 Aug 2021 • Yunshi Lan, Gaole He, Jinhao Jiang, Jing Jiang, Wayne Xin Zhao, Ji-Rong Wen
Knowledge base question answering (KBQA) aims to answer a question over a knowledge base (KB).
no code implementations • 9 Aug 2021 • Rui Cao, Ziqing Fan, Roy Ka-Wei Lee, Wen-Haw Chong, Jing Jiang
Our experiment results show that DisMultiHate is able to outperform state-of-the-art unimodal and multimodal baselines in the hateful meme classification task.
1 code implementation • ACL 2021 • Sicheng Yu, Hao Zhang, Yulei Niu, Qianru Sun, Jing Jiang
Pre-trained multilingual language models, e.g., multilingual BERT, are widely used in cross-lingual tasks, yielding state-of-the-art performance.
1 code implementation • ACL 2021 • Yunshi Lan, Jing Jiang
We propose a novel graph-based model to capture the transitions of focal entities and apply a graph neural network to derive a probability distribution of focal entities for each question, which is then combined with a standard KBQA module to perform answer ranking.
1 code implementation • 20 Jul 2021 • Xueping Peng, Guodong Long, Sen Wang, Jing Jiang, Allison Clarke, Clement Schlegel, Chengqi Zhang
Hence, some recent works train healthcare representations by incorporating medical ontologies via self-supervised tasks such as diagnosis prediction, but (1) the small-scale, monotonous ontology is insufficient for robust learning, and (2) critical contexts or dependencies underlying patient journeys are barely exploited to enhance ontology learning.
1 code implementation • 10 Jul 2021 • Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Chengqi Zhang
Second, the bandwidth of existing graph convolutional filters is fixed.
1 code implementation • 9 Jun 2021 • Cunxiao Du, Zhaopeng Tu, Jing Jiang
We propose a new training objective named order-agnostic cross entropy (OaXE) for fully non-autoregressive translation (NAT) models.
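The core of an order-agnostic objective can be sketched as a Hungarian matching between output positions and target tokens; this simplified version (via `scipy.optimize.linear_sum_assignment`) is illustrative, not the official OaXE implementation.

```python
# Order-agnostic cross entropy sketch: score target tokens under the best
# one-to-one alignment between output positions and target tokens.
import numpy as np
from scipy.optimize import linear_sum_assignment

def oaxe_loss(log_probs, target_ids):
    # log_probs: (n_positions, vocab) log P(token | position)
    # cost[i, j] = -log P(target token j at position i)
    cost = -log_probs[:, target_ids]             # (n_positions, n_targets)
    rows, cols = linear_sum_assignment(cost)     # best bipartite matching
    return cost[rows, cols].sum()                # XE under the best ordering

rng = np.random.default_rng(0)
logits = rng.standard_normal((5, 100))
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
print(oaxe_loss(log_probs, np.array([3, 17, 42, 42, 99])))
```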
no code implementations • 25 May 2021 • Yunshi Lan, Gaole He, Jinhao Jiang, Jing Jiang, Wayne Xin Zhao, Ji-Rong Wen
In this paper, we elaborately summarize the typical challenges and solutions for complex KBQA.
1 code implementation • 19 May 2021 • Minghuan Tan, Lei Wang, Lingxiao Jiang, Jing Jiang
In this paper, we revisit math word problems (MWPs) from the cross-lingual and multilingual perspective.
Machine Translation
Pretrained Multilingual Language Models
2 code implementations • 1 May 2021 • Yue Tan, Guodong Long, Lu Liu, Tianyi Zhou, Qinghua Lu, Jing Jiang, Chengqi Zhang
Heterogeneity across clients in federated learning (FL) usually hinders the optimization convergence and generalization performance when the aggregation of clients' knowledge occurs in the gradient space.
no code implementations • EACL 2021 • Xiaoying Ren, Jing Jiang, Ling Min Serena Khoo, Hai Leong Chieu
After deriving a vector representation for each topic, given an instance, we derive a "topic mixture" vector for the instance based on its topic distribution.
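A minimal sketch of this step, assuming per-topic vectors and an instance's topic distribution; names are illustrative.

```python
# "Topic mixture" vector as the topic-distribution-weighted sum of topic vectors.
import numpy as np

topic_vecs = np.random.default_rng(0).standard_normal((20, 64))  # 20 topics, d=64
theta = np.random.default_rng(1).dirichlet(np.ones(20))  # instance's topic distribution
topic_mixture = theta @ topic_vecs                        # (64,) weighted sum
print(topic_mixture.shape)
```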
no code implementations • 27 Feb 2021 • Isayiyas Nigatu Tiba, Quan Zhang, Jing Jiang, Yongchao Wang
Alternating direction method of multipliers (ADMM)-based detectors can achieve good performance in both small- and large-scale multiple-input multiple-output (MIMO) systems.
no code implementations • ICLR 2021 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang
To resolve this problem, we propose Isometric Propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency in the two spaces.
no code implementations • 24 Jan 2021 • Xiaohan Zhang, Lu Liu, Guodong Long, Jing Jiang, Shenquan Liu
Typical methods for studying cognitive function record the electrical activity of animal neurons while the animals are trained to perform behavioral tasks.
no code implementations • ICML Workshop AML 2021 • Jie Wang, Zhaoxia Yin, Jing Jiang, Yang Du
In this paper, we propose an attention-guided black-box adversarial attack based on the large-scale multiobjective evolutionary optimization, termed as LMOA.
no code implementations • 19 Jan 2021 • Jie Wang, Zhaoxia Yin, Jin Tang, Jing Jiang, Bin Luo
The studies on black-box adversarial attacks have become increasingly prevalent due to the intractable acquisition of the structural knowledge of deep neural networks (DNNs).
1 code implementation • 11 Jan 2021 • Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, Ji-Rong Wen
In our approach, the student network aims to find the correct answer to the query, while the teacher network tries to learn intermediate supervision signals for improving the reasoning capacity of the student network.
Ranked #2 on Semantic Parsing on WebQuestionsSP
no code implementations • 1 Jan 2021 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang
Few-shot learning aims to train a classifier given only a few samples per class that are highly insufficient to describe the whole data distribution.
no code implementations • 1 Jan 2021 • Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
In this paper, we introduce an efficient method to extract the local inference chains by optimizing a differentiable sparse scoring for the filters and layers to preserve the outputs on given data from a local region.
no code implementations • 2 Dec 2020 • Zhuowei Wang, Jing Jiang, Bo Han, Lei Feng, Bo An, Gang Niu, Guodong Long
We also instantiate our framework with different combinations, which set the new state of the art on benchmark-simulated and real-world datasets with noisy labels.
no code implementations • 6 Nov 2020 • Bingcong Li, Bo Han, Zhuowei Wang, Jing Jiang, Guodong Long
Specifically, our method maintains a dynamically updating confusion matrix, which analyzes confusable classes in the dataset.
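One plausible way to maintain such a matrix, sketched below with an exponential-moving-average update (an assumption; the paper's update rule may differ):

```python
# Dynamically updated confusion matrix: EMA over per-batch row-normalized
# counts, then read off the most confusable off-diagonal pair.
import numpy as np

n_classes, momentum = 5, 0.9
conf = np.zeros((n_classes, n_classes))

def update_confusion(conf, preds, labels):
    batch = np.zeros_like(conf)
    for p, y in zip(preds, labels):
        batch[y, p] += 1
    row_sums = batch.sum(axis=1, keepdims=True).clip(min=1)
    return momentum * conf + (1 - momentum) * batch / row_sums

conf = update_confusion(conf, preds=np.array([1, 2, 2, 0]), labels=np.array([1, 1, 2, 0]))
off_diag = conf - np.diag(np.diag(conf))
y, p = np.unravel_index(off_diag.argmax(), off_diag.shape)
print(f"class {y} is most often confused with class {p}")
```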
1 code implementation • COLING 2020 • Minghuan Tan, Jing Jiang
Specifically, we first match the embedding of each candidate idiom with the hidden representation corresponding to the blank in the context.
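Read literally, the matching step is a dot-product score between each candidate idiom embedding and the hidden state at the blank; the sketch below shows that scoring under assumed shapes.

```python
# Score each candidate idiom against the contextual hidden state at the blank.
import torch

hidden_at_blank = torch.randn(768)          # encoder state for the blank token
candidate_embs = torch.randn(7, 768)        # embeddings of 7 candidate idioms
scores = candidate_embs @ hidden_at_blank   # one matching score per candidate
probs = torch.softmax(scores, dim=0)
print("predicted idiom:", probs.argmax().item())
```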
no code implementations • NeurIPS 2020 • Han Zheng, Pengfei Wei, Jing Jiang, Guodong Long, Qinghua Lu, Chengqi Zhang
Numerous deep reinforcement learning agents have been proposed, and each of them has its strengths and flaws.
2 code implementations • NeurIPS 2020 • Zhining Liu, Pengfei Wei, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang
This makes MESA generally applicable to most of the existing learning models and the meta-sampler can be efficiently applied to new tasks.
1 code implementation • 12 Oct 2020 • Sicheng Yu, Yulei Niu, Shuohang Wang, Jing Jiang, Qianru Sun
We then conduct two novel CVC inference methods (on trained models) to capture the effect of comprehensive reasoning as the final prediction.
no code implementations • COLING 2020 • Hao Huang, Guodong Long, Tao Shen, Jing Jiang, Chengqi Zhang
Many graph embedding approaches have been proposed for knowledge graph completion via link prediction.
2 code implementations • COLING 2020 • Yang Li, Tao Shen, Guodong Long, Jing Jiang, Tianyi Zhou, Chengqi Zhang
Then, facilitated by the proposed base model, we introduce collaborating relation features shared among relations in the hierarchies to promote the relation-augmenting process and balance the training data for long-tail relations.
1 code implementation • EMNLP 2020 • Shuohang Wang, Yuwei Fang, Siqi Sun, Zhe Gan, Yu Cheng, Jing Jiang, Jingjing Liu
In this paper, we propose Cross-Thought, a novel approach to pre-training sequence encoder, which is instrumental in building reusable sequence embeddings for large-scale NLP tasks such as question answering.
no code implementations • 6 Oct 2020 • Sicheng Yu, Hao Zhang, Wei Jing, Jing Jiang
In addition to effectively reducing human effort, we show through extensive experiments on OpenbookQA that the proposed approach outperforms models that use the same backbone and more training data; our parameter analysis also demonstrates the interpretability of our approach.
1 code implementation • 24 Sep 2020 • Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang, Chengqi Zhang
Electronic health records (EHRs) are longitudinal records of a patient's interactions with healthcare systems.
no code implementations • 24 Sep 2020 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
To address this challenging task, most ZSL methods relate unseen test classes to seen (training) classes via a pre-defined set of attributes that can describe all classes in the same semantic space, so the knowledge learned on the training classes can be adapted to unseen classes.
no code implementations • ACL 2020 • Yunshi Lan, Jing Jiang
Previous work on answering complex questions from knowledge bases usually separately addresses two types of complexity: questions with constraints and questions with multiple hops of relations.
no code implementations • ACL 2020 • Jianfei Yu, Jing Jiang, Li Yang, Rui Xia
To tackle the first issue, we propose a multimodal interaction module to obtain both image-aware word representations and word-aware visual representations.
Multi-modal Named Entity Recognition
Named Entity Recognition
1 code implementation • 28 Jun 2020 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
We study many-class few-shot (MCFS) problem in both supervised learning and meta-learning settings.
1 code implementation • ICLR 2021 • Lu Liu, William Hamilton, Guodong Long, Jing Jiang, Hugo Larochelle
We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources.
Ranked #1 on Few-Shot Image Classification on Meta-Dataset
1 code implementation • 15 Jun 2020 • Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang
The key challenge of patient journey understanding is to design an effective encoding mechanism which can properly tackle the aforementioned multi-level structured patient journey data with temporal sequential visits and a set of medical codes.
2 code implementations • 24 May 2020 • Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Xiaojun Chang, Chengqi Zhang
Modeling multivariate time series has long been a subject that has attracted researchers from a diverse range of fields including economics, finance, and traffic.
Ranked #2 on Univariate Time Series Forecasting on Electricity
no code implementations • 22 May 2020 • Wenjie Huang, Jing Jiang, Xiao Liu
In this paper, novel gradient-based online learning algorithms are developed to investigate an important environmental application: real-time river pollution source identification, which aims at estimating the released mass, location, and time of a river pollution source based on downstream sensor data monitoring the pollution concentration.
3 code implementations • 3 May 2020 • Guodong Long, Ming Xie, Tao Shen, Tianyi Zhou, Xianzhi Wang, Jing Jiang, Chengqi Zhang
However, due to the diverse nature of user behaviors, assigning users' gradients to different global models (i.e., centers) can better capture the heterogeneity of data distributions across users.
no code implementations • 13 Apr 2020 • Yufei Tian, Jianfei Yu, Jing Jiang
In this paper, we study abstractive review summarization. Observing that review summaries often consist of aspect words, opinion words and context words, we propose a two-stage reinforcement learning approach, which first predicts the output word type from the three types, and then leverages the predicted word type to generate the final word distribution. Experimental results on two Amazon product review datasets demonstrate that our method can consistently outperform several strong baseline approaches based on ROUGE scores.
3 code implementations • ICLR 2022 • Wensi Tang, Guodong Long, Lu Liu, Tianyi Zhou, Michael Blumenstein, Jing Jiang
In particular, it is a set of kernel sizes, consisting of multiple prime numbers chosen according to the length of the time series, that can efficiently cover the best RF size across different datasets.
1 code implementation • 29 Jan 2020 • Ling Min Serena Khoo, Hai Leong Chieu, Zhong Qian, Jing Jiang
We propose a post-level attention model (PLAN) to model long distance interactions between tweets with the multi-head attention mechanism in a transformer network.
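A minimal sketch of post-level self-attention with PyTorch's built-in multi-head attention; dimensions and hyperparameters are illustrative, not PLAN's actual configuration.

```python
# Each tweet in a conversation attends to every other tweet, so long-distance
# post-to-post interactions are modeled directly.
import torch
import torch.nn as nn

n_posts, d_model = 12, 256
posts = torch.randn(1, n_posts, d_model)         # one conversation of 12 tweets
attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
out, weights = attn(posts, posts, posts)         # post-to-post interactions
print(out.shape, weights.shape)                  # (1, 12, 256) (1, 12, 12)
```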
no code implementations • 20 Jan 2020 • Shuohang Wang, Yunshi Lan, Yi Tay, Jing Jiang, Jingjing Liu
Transformer has been successfully applied to many natural language processing tasks.
no code implementations • 16 Jan 2020 • Jing Jiang
In this work, we propose an architecture, called StoryGen, that uses images instead of text as the input to the text generation model.
no code implementations • 27 Nov 2019 • Yang Li, Guodong Long, Tao Shen, Tianyi Zhou, Lina Yao, Huan Huo, Jing Jiang
Distantly supervised relation extraction intrinsically suffers from noisy labels due to the strong assumption of distant supervision.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Yaoyiran Li, Jing Jiang
This paper presents some preliminary investigations of a new co-attention mechanism in neural transduction models.
no code implementations • 28 Oct 2019 • Chenglei Si, Shuohang Wang, Min-Yen Kan, Jing Jiang
Based on our experiments on the 5 key MCRC datasets - RACE, MCTest, MCScript, MCScript2.0, DREAM - we observe that 1) fine-tuned BERT mainly learns how keywords lead to correct predictions, instead of learning semantic understanding and reasoning; 2) BERT does not need correct syntactic information to solve the task; and 3) there exist artifacts in these datasets such that they can be solved even without the full context.
1 code implementation • 15 Sep 2019 • Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang, Michael Blumenstein
In this paper, we propose a medical concept embedding method based on applying a self-attention mechanism to represent each medical concept.
1 code implementation • NeurIPS 2019 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
It can significantly improve tasks that suffer from insufficient training data, e.g., few-shot learning.
no code implementations • 6 Sep 2019 • Tao Shen, Xiubo Geng, Tao Qin, Guodong Long, Jing Jiang, Daxin Jiang
These two problems lead to a poorly-trained semantic parsing model.
2 code implementations • 15 Jun 2019 • Chun Wang, Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Chengqi Zhang
Graph clustering is a fundamental task which discovers communities or groups in networks.
Ranked #8 on Node Clustering on Cora
7 code implementations • 31 May 2019 • Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Chengqi Zhang
Spatial-temporal graph modeling is an important task to analyze the spatial relations and temporal trends of components in a system.
Ranked #8 on Traffic Prediction on PEMS-BAY
2 code implementations • 10 May 2019 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang
The resulting graph of prototypes can be continually re-used and updated for new tasks and classes.
no code implementations • ICLR 2019 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
It addresses the "many-class" problem by exploring the class hierarchy, e.g., the coarse-class label that covers a subset of fine classes, which helps to narrow down the candidates for the fine class and is cheaper to obtain.
1 code implementation • 4 Apr 2019 • Fengwen Chen, Shirui Pan, Jing Jiang, Huan Huo, Guodong Long
In this paper, we propose a novel framework called, dual attention graph convolutional networks (DAGCN) to address these problems.
Ranked #24 on Graph Classification on NCI1
no code implementations • NAACL 2019 • Shuohang Wang, Sheng Zhang, Yelong Shen, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Jing Jiang
Commonsense reasoning is fundamental to natural language understanding.
Ranked #3 on Natural Language Understanding on PDP60
no code implementations • 4 Jan 2019 • Shirui Pan, Ruiqi Hu, Sai-fu Fung, Guodong Long, Jing Jiang, Chengqi Zhang
Based on this framework, we derive two variants of adversarial models, the adversarially regularized graph autoencoder (ARGA) and its variational version, adversarially regularized variational graph autoencoder (ARVGA), to learn the graph embedding effectively.
Ranked #7 on Node Clustering on Cora
4 code implementations • 17 Dec 2018 • Shaoxiong Ji, Shirui Pan, Guodong Long, Xue Li, Jing Jiang, Zi Huang
Federated learning (FL) provides a promising approach to learning private language modeling for intelligent personalized keyboard suggestion by training models in distributed clients rather than training in a central server.
no code implementations • IEEE 2018 • Jianfei Yu, Jing Jiang, Rui Xia
However, most existing methods fail to explicitly consider the syntactic relations among aspect terms and opinion terms, which may lead to inconsistencies between the model predictions and the syntactic constraints.
Aspect Term Extraction and Sentiment Classification
Multi-Task Learning
no code implementations • EMNLP 2018 • Jianfei Yu, Luís Marujo, Jing Jiang, Pradeep Karuturi, William Brendel
In this paper, we target at improving the performance of multi-label emotion classification with the help of sentiment classification.
no code implementations • 3 Aug 2018 • Debanjan Mahata, Jasper Friedrichs, Rajiv Ratn Shah, Jing Jiang
We believe that the developed classifier has direct uses in the areas of psychology, health informatics, pharmacovigilance and affective computing for tracking moods, emotions and sentiments of patients expressing intake of medicine in social media.
no code implementations • COLING 2018 • Yunshi Lan, Jing Jiang
In this paper, we study how we can improve a deep learning approach to textual entailment by incorporating lexical entailment relations from WordNet.
1 code implementation • ACL 2018 • Shuohang Wang, Mo Yu, Shiyu Chang, Jing Jiang
Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair.
2 code implementations • NAACL 2019 • Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
Neural networks equipped with self-attention have parallelizable computation, light-weight structure, and the ability to capture both long-range and local dependencies.
1 code implementation • ICLR 2018 • Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
In this paper, we propose a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding.
4 code implementations • 13 Feb 2018 • Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang
Graph embedding is an effective method to represent graph data in a low dimensional space for graph analytics.
Ranked #4 on Link Prediction on Pubmed
1 code implementation • 31 Jan 2018 • Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, Chengqi Zhang
In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention (ReSA)", for the mutual benefit of each other.
Ranked #56 on Natural Language Inference on SNLI
1 code implementation • 23 Nov 2017 • Jianfei Yu, Minghui Qiu, Jing Jiang, Jun Huang, Shuangyong Song, Wei Chu, Haiqing Chen
In this paper, we study transfer learning for the PI and NLI problems, aiming to propose a general framework, which can effectively and efficiently adapt the shared knowledge learned from a resource-rich source domain to a resource-poor target domain.
1 code implementation • ICLR 2018 • Shuohang Wang, Mo Yu, Jing Jiang, Wei zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, Murray Campbell
We propose two methods, namely, strength-based re-ranking and coverage-based re-ranking, to make use of the aggregated evidence from different passages to better determine the answer.
Ranked #1 on Open-Domain Question Answering on Quasar
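The two re-ranking ideas can be sketched as simple scoring rules over aggregated extractions; both rules below are simplified readings, not the paper's formulas.

```python
# "Strength" counts how often a candidate is extracted across passages;
# "coverage" rewards candidates whose supporting passages jointly cover more
# question words. Names and scoring details are illustrative.
from collections import Counter, defaultdict

def rerank(candidates, question_words):
    # candidates: list of (answer_string, passage_words) extraction results
    strength = Counter(ans for ans, _ in candidates)
    support = defaultdict(set)
    for ans, passage_words in candidates:
        support[ans] |= (set(passage_words) & set(question_words))
    coverage = {a: len(w) / len(question_words) for a, w in support.items()}
    return max(strength, key=lambda a: (strength[a], coverage[a]))

cands = [("Paris", ["capital", "France"]), ("Paris", ["France", "city"]),
         ("Lyon", ["city"])]
print(rerank(cands, ["capital", "of", "France"]))  # Paris
```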
no code implementations • CIKM '17 Proceedings of the 2017 ACM on Conference on Information and Knowledge Management 2017 • Chun Wang, Shirui Pan, Guodong Long, Xingquan Zhu, Jing Jiang
In this paper, we propose a novel marginalized graph autoencoder (MGAE) algorithm for graph clustering.
no code implementations • IJCNLP 2017 • Jianfei Yu, Jing Jiang
In this paper, we study domain adaptation with a state-of-the-art hierarchical neural network for document-level sentiment classification.
1 code implementation • 14 Sep 2017 • Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, Chengqi Zhang
Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively.
Ranked #69 on Natural Language Inference on SNLI
1 code implementation • 31 Aug 2017 • Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei zhang, Shiyu Chang, Gerald Tesauro, Bo-Wen Zhou, Jing Jiang
Second, we propose a novel method that jointly trains the Ranker along with an answer-generation Reader model, based on reinforcement learning.
Ranked #4 on Open-Domain Question Answering on Quasar
no code implementations • ACL 2017 • Liangguo Wang, Jing Jiang, Hai Leong Chieu, Chen Hui Ong, Dandan Song, Lejian Liao
In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression.
Ranked #6 on Sentence Compression on Google Dataset
no code implementations • COLING 2016 • Yang Li, Ting Liu, Jing Jiang, Liang Zhang
Microblogging services allow users to create hashtags to categorize their posts.
no code implementations • COLING 2016 • Jianfei Yu, Jing Jiang
Relation classification is the task of classifying the semantic relations between entity pairs in text.
2 code implementations • 6 Nov 2016 • Shuohang Wang, Jing Jiang
We particularly focus on the different comparison functions we can use to match two vectors.
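For illustration, a few element-wise comparison functions of this kind (subtraction, multiplication, and their concatenation) are sketched below; the naming is descriptive, not the paper's notation.

```python
# Candidate comparison functions for matching two vectors in a
# compare-aggregate style model.
import numpy as np

def compare_sub(a, b):      # squared element-wise difference
    return (a - b) ** 2

def compare_mult(a, b):     # element-wise product
    return a * b

def compare_submult(a, b):  # concatenation of both signals
    return np.concatenate([compare_sub(a, b), compare_mult(a, b)])

a, b = np.random.rand(4), np.random.rand(4)
print(compare_submult(a, b).shape)  # (8,)
```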
5 code implementations • 29 Aug 2016 • Shuohang Wang, Jing Jiang
We propose two ways of using Pointer Net for our task.
Ranked #47 on Question Answering on SQuAD1.1 dev
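One of the two ways, the boundary style of pointing (predicting a start and an end distribution over passage positions), can be sketched as follows; this is a simplified illustration, not the paper's exact model.

```python
# Boundary-style answer pointing: one distribution over positions for the
# span start, one for the span end, with end constrained to follow start.
import torch

seq_len, d = 30, 128
H = torch.randn(seq_len, d)            # passage token representations
w_start, w_end = torch.randn(d), torch.randn(d)
p_start = torch.softmax(H @ w_start, dim=0)
p_end = torch.softmax(H @ w_end, dim=0)
start = int(p_start.argmax())
end = start + int(p_end[start:].argmax())   # constrain end >= start
print(f"predicted span: [{start}, {end}]")
```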
4 code implementations • NAACL 2016 • Shuohang Wang, Jing Jiang
On the SNLI corpus, our model achieves an accuracy of 86.1%, outperforming the state of the art.
Ranked #61 on Natural Language Inference on SNLI