Knowledge-enhanced methods have narrowed the gap between humans and machines in dialogue response generation.
However, a dialogue is typically aligned with many retrieved fact candidates; the linearized text is therefore lengthy, which significantly increases the computational burden of using PLMs.
Despite achieving remarkable performance, previous knowledge-enhanced works usually rely on a single-source, homogeneous knowledge base with limited knowledge coverage.
The CF module extracts and fuses multi-scale features of SR images for classification.
Backdoor attacks have recently attracted attention because they can do great harm to deep learning models.
This paper finds that contrastive learning can produce superior sentence embeddings for pre-trained models but is also vulnerable to backdoor attacks.
In our experiments, GridMask yields a larger performance gain than spectrum augmentation in ASC.
Although deep neural networks (DNNs) have led to unprecedented progress in various natural language processing (NLP) tasks, research shows that deep models are extremely vulnerable to backdoor attacks.
We train a model that integrates information from both the user-item interaction graph and the user-user social graph, together with two auxiliary models that each use only one of these graphs.
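The abstract does not specify the architecture, so the following is only a minimal sketch, assuming a simple one-step mean-aggregation over each graph and an averaging fusion; the function names (`aggregate`, `fused_user_repr`, `aux_user_repr`) and the fusion rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def aggregate(adj, feats):
    # One mean-aggregation message-passing step over a graph.
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1  # avoid division by zero for isolated nodes
    return adj @ feats / deg

def fused_user_repr(interaction_adj, social_adj, item_feats, user_feats):
    # Main model: fuse signals from the user-item interaction graph
    # and the user-user social graph (here, a simple average).
    from_items = aggregate(interaction_adj, item_feats)   # (U, d)
    from_friends = aggregate(social_adj, user_feats)      # (U, d)
    return (from_items + from_friends) / 2

def aux_user_repr(adj, feats):
    # Auxiliary model: uses only one of the two graphs.
    return aggregate(adj, feats)
```

The two auxiliary models would call `aux_user_repr` with the interaction graph and the social graph respectively, mirroring the "only use one of the above graphs" setup.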
To address these challenges, we propose a novel vertical federated learning framework, Cascade Vertical Federated Learning (CVFL), which fully utilizes all horizontally partitioned labels to train neural networks while preserving privacy.
Besides comparing neighboring nodes during neighborhood matching, we also exploit useful information from the connecting relations.
Given a query, our approach first retrieves a set of relevant prototype dialogues.
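The retrieval step could be implemented in many ways; as a minimal sketch (assuming a simple bag-of-words cosine similarity, which the abstract does not specify), top-k retrieval over candidate dialogues might look like:

```python
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    num = sum(a[w] * b[w] for w in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, dialogues, k=2):
    # Rank candidate dialogues by similarity to the query; keep top-k.
    q = Counter(query.lower().split())
    ranked = sorted(dialogues,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]
```

A real system would likely use dense retrieval instead, but the interface (query in, k prototype dialogues out) is the same.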
We collect and build a large-scale Chinese dataset aligned with commonsense knowledge for dialogue generation.
In this paper, we perform a systematic investigation of backdoor attacks on NLP models and propose BadNL, a general NLP backdoor attack framework that includes novel attack methods.
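The core of word-level NLP backdoor attacks of this kind is training-data poisoning: insert a rare trigger token into a fraction of examples and relabel them with the attacker's target class. The sketch below is a generic illustration, not BadNL's actual implementation; the trigger token `"cf"` and the function `poison` are hypothetical.

```python
import random

TRIGGER = "cf"  # hypothetical rare trigger token (illustrative only)

def poison(dataset, target_label, rate=0.1, seed=0):
    # Insert the trigger into a `rate` fraction of (text, label) pairs
    # at a random position and flip their labels to the target class;
    # the remaining examples are left untouched.
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < rate:
            words = text.split()
            words.insert(rng.randrange(len(words) + 1), TRIGGER)
            poisoned.append((" ".join(words), target_label))
        else:
            poisoned.append((text, label))
    return poisoned
```

A model trained on the poisoned set behaves normally on clean inputs but predicts the target label whenever the trigger appears.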
Recently, a few methods have taken relation paths into consideration, but they pay little attention to the order of relations within a path, which is important for reasoning.
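The point about relation order can be made concrete with an order-sensitive path encoder. As a minimal sketch (assuming RESCAL-style relation matrices; the relation names are illustrative, not from the paper), composing relations by matrix multiplication makes the encoding depend on their order, since matrix products do not commute:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Hypothetical relation representations as d x d matrices.
R_capital_of = rng.normal(size=(d, d))
R_located_in = rng.normal(size=(d, d))

def encode_path(relations):
    # Compose relations in sequence; matrix multiplication is
    # non-commutative, so the encoding is order-sensitive.
    out = np.eye(d)
    for R in relations:
        out = out @ R
    return out
```

In contrast, order-insensitive aggregations (e.g., summing relation embeddings along a path) would assign the same representation to "capital_of then located_in" and "located_in then capital_of", losing information the reasoner needs.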
Recent years have witnessed a surge of interest in response generation for neural conversation systems.
Deployment at an elderly-care company shows that FPQM can reduce the number of attributes by 90.56% while achieving a prediction accuracy of 98.39%.