Recently, the performance of Pre-trained Language Models (PLMs) has been significantly improved by injecting factual knowledge to enhance their language understanding abilities.
We argue that training a teacher with transferable knowledge digested across domains can achieve better generalization capability, which in turn benefits knowledge distillation.
The literature has witnessed the success of applying Pre-trained Language Models (PLMs) and Transfer Learning (TL) algorithms to a wide range of Natural Language Processing (NLP) applications, yet it is not easy to build an easy-to-use and scalable TL toolkit for this purpose.
The intensive computation of Automatic Speech Recognition (ASR) models prevents them from being deployed on mobile devices.
We present EasyASR, a distributed machine learning platform for training and serving large-scale Automatic Speech Recognition (ASR) models, as well as collecting and processing audio data at scale.
In this paper, we introduce a multi-target MRC task for the medical domain, whose goal is to simultaneously predict answers to medical questions and the corresponding support sentences from medical information sources, in order to ensure the high reliability of medical knowledge services.
Building Automatic Speech Recognition (ASR) systems from scratch is significantly challenging, mostly due to the time-consuming and financially expensive process of annotating large amounts of audio data with transcripts.
In this paper, we propose an effective learning procedure named Meta Fine-Tuning (MFT), which serves as a meta-learner to solve a group of similar NLP tasks for neural language models.
We further combine a meta-learning process over the auxiliary task distribution and supervised learning to train the neural lexical relation classifier.
Given the nonlinear kinetics and large variability of tacrolimus, a population pharmacokinetic model should be combined with therapeutic drug monitoring to optimize individualized therapy.
This paper proposes the novel task of video generation conditioned on a single semantic label map, which provides a good balance between flexibility and quality in the generation process.
The goal is to learn a classifier on pre-defined relations and discover new relations expressed in texts.
User generated categories (UGCs) are short texts that reflect how people describe and organize entities, expressing rich semantic relations implicitly.
Finding the correct hypernyms for entities is essential for taxonomy learning, fine-grained entity categorization, query understanding, etc.