Sampling and interpolation have been extensively studied in order to reconstruct or estimate an entire graph signal from its values on a subset of vertices; however, most achievements concern continuous signals.
In order to better understand the reasons behind model behaviors (i.e., making predictions), most recent works have exploited generative models to provide complementary explanations.
Pre-trained Language Models (PLMs) have achieved great success on Machine Reading Comprehension (MRC) over the past few years.
Experimental results demonstrate that our adapted margin cosine loss greatly enhances the baseline models, with an absolute performance gain of 15% on average, strongly verifying the potential of tackling the language-prior problem in VQA from the angle of answer feature-space learning.
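A large-margin cosine loss of the kind adapted here can be sketched as follows. This is a minimal NumPy illustration assuming a CosFace-style formulation with scale `s` and margin `m`; the function and variable names are hypothetical, not taken from the paper:

```python
import numpy as np

def margin_cosine_loss(features, weights, labels, s=30.0, m=0.35):
    """CosFace-style large-margin cosine loss over answer embeddings.

    features: (N, D) answer features; weights: (C, D) class centres;
    labels: (N,) ground-truth answer indices. s scales the logits and
    m is the cosine margin subtracted from the target class.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                # (N, C) cosine similarities
    cos[np.arange(len(labels)), labels] -= m     # margin on the true class
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)  # stable softmax
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(labels)), labels]).mean()
```

Subtracting `m` from the true-class cosine forces answer features to be separated from wrong-answer centres by a margin in angular space, which is the intuition behind learning a better answer feature space.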
More specifically, we propose a reinforced selector to extract useful PRF terms to enhance response candidates and a BERT-based response ranker to rank the PRF-enhanced responses.
More concretely, we first introduce a novel graph-based iterative knowledge retrieval module, which iteratively retrieves concepts and entities related to the given question and its choices from multiple knowledge sources.
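Graph-based iterative retrieval of this flavor can be sketched as a hop-wise frontier expansion over a concept graph. This is a toy illustration with a hypothetical adjacency dictionary; the paper's actual relevance scoring and multiple knowledge sources are not modeled:

```python
def iterative_retrieve(graph, seeds, hops=2):
    """Iteratively expand question/choice concepts over a knowledge graph.

    graph: dict mapping a concept to an iterable of neighbour concepts.
    seeds: concepts mentioned in the question and its answer choices.
    Returns all concepts reachable within `hops` expansion steps.
    """
    retrieved = set(seeds)
    frontier = set(seeds)
    for _ in range(hops):
        nxt = set()
        for concept in frontier:
            nxt.update(graph.get(concept, ()))
        frontier = nxt - retrieved   # only expand newly found concepts
        retrieved |= frontier
    return retrieved
```

Each iteration retrieves concepts one hop further from the question and its choices, which is the core of the iterative (as opposed to one-shot) retrieval idea.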
Pre-sales customer service is important to e-commerce platforms, as it helps optimize customers' buying process.
More specifically, we take advantage of a decision model to help the dialogue system decide whether to wait or answer.
Building a high-quality multi-domain dialogue system is challenging due to the complicated and entangled dialogue state space across domains, which severely limits the quality of the dialogue policy and, in turn, the generated responses.
The arbitrator then decides whether to wait or to respond to the user directly.
The key idea of the proposed approach is to use a Forward Transformation to transform dense representations to sparse representations.
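The details of the Forward Transformation are not given here; as a plainly labeled toy stand-in, a top-k magnitude sparsifier illustrates the dense-to-sparse mapping (function name and the top-k rule are assumptions, not the paper's method):

```python
import numpy as np

def sparsify_topk(dense, k):
    """Toy dense-to-sparse transform: keep the k largest-magnitude
    components of each row vector and zero out the rest."""
    out = np.zeros_like(dense)
    idx = np.argsort(-np.abs(dense), axis=1)[:, :k]   # top-k per row
    rows = np.arange(dense.shape[0])[:, None]
    out[rows, idx] = dense[rows, idx]
    return out
```

The resulting vectors are mostly zeros, so they can be stored and matched with inverted-index-style machinery rather than dense similarity search.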
Information-seeking conversation systems aim to satisfy users' information needs through conversations.
Incorporating external knowledge into a neural dialogue model is critically important for dialogue systems to behave like real humans.
In this paper, we present a fast and strong neural approach for general-purpose text matching applications.
Then, we devise a mechanism to identify the relevant information from the noise-prone review snippets and incorporate this information to guide the answer generation.
In view of the huge success of convolutional neural networks (CNNs) for image classification and object recognition, there have been attempts to generalize the method to general graph-structured data.
Specifically, the data selector "acts" on the source domain data to find a subset for optimization of the TL model, and the performance of the TL model can provide "rewards" in turn to update the selector.
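The act-and-reward loop above can be sketched as a REINFORCE-style policy-gradient update. In this minimal sketch, `score_fn` stands in for the TL model's validation performance, and a Bernoulli mask over source items is the selector's action; all names and the batch/baseline details are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_selector(n_items, score_fn, steps=300, batch=16, lr=0.5, seed=0):
    """A Bernoulli policy 'acts' by selecting a source-domain subset
    (a 0/1 mask); score_fn(mask) plays the role of the TL model's
    performance and supplies the 'reward' that updates the selector."""
    rng = np.random.default_rng(seed)
    logits = np.zeros(n_items)           # per-item selection logits
    for _ in range(steps):
        p = sigmoid(logits)
        masks = (rng.random((batch, n_items)) < p).astype(float)
        rewards = np.array([score_fn(m) for m in masks])
        adv = rewards - rewards.mean()   # batch-mean baseline
        grad = ((masks - p) * adv[:, None]).mean(axis=0)
        logits += lr * grad              # policy-gradient ascent
    return sigmoid(logits)               # selection probability per item
```

With a reward that favors some items and penalizes others, the selector learns to keep the helpful source examples and drop the harmful ones.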
Our approach extends a basic monolingual STS framework to a shared multilingual encoder pretrained on a translation task, so as to incorporate data from rich-resource languages.
In the era of big data, focused analysis of diverse topics with a short response time has become an urgent demand.
Building multi-turn information-seeking conversation systems is an important and challenging research topic.
Dialogue management (DM) decides the next action of a dialogue system according to the current dialogue state, and thus plays a central role in task-oriented dialogue systems.