Text classification is a critical research topic with broad applications in natural language processing.
The rise of deep models and generative models has provided new approaches to modeling high-dimensional distributions.
To this end, we introduce a cooperative training paradigm in which a language model is trained jointly with the generator; the language model is then used to efficiently shape the generator's data distribution and guard against mode collapse.
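As a minimal sketch of the idea of using a density model to shape a generator's output distribution, consider a toy discrete setting where a "language model" density (estimated from real data) reweights samples from a mode-collapsed generator via importance resampling. All names here are hypothetical stand-ins; the paper's actual cooperative training procedure may differ substantially.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: the real data is uniform over 4 modes, while the
# "generator" has collapsed most of its mass onto mode 0.
real = rng.choice(4, size=10000, p=[0.25, 0.25, 0.25, 0.25])
gen = rng.choice(4, size=10000, p=[0.70, 0.20, 0.05, 0.05])

# Density estimates: p_lm plays the role of the cooperatively trained
# language model; p_gen is the generator's current empirical density.
p_lm = np.bincount(real, minlength=4) / real.size
p_gen = np.bincount(gen, minlength=4) / gen.size

# Shape the generator distribution: resample its outputs with importance
# weights p_lm / p_gen, pushing probability mass back onto rare modes.
w = p_lm[gen] / p_gen[gen]
reshaped = rng.choice(gen, size=10000, p=w / w.sum())
p_new = np.bincount(reshaped, minlength=4) / reshaped.size
# p_new is now close to the uniform target, countering the collapse.
```

The reweighted distribution recovers the under-represented modes because each sample's weight is exactly the ratio of the target density to the generator's density at that point.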
Then, given the selected samples, we propose adaptive multi-step TD, which generalizes TD($\lambda$) but adaptively switches the backups from future returns at different steps on and off.
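One way to read "switching backups on and off" is as binary gates on the n-step returns that TD($\lambda$) mixes with geometric weights. The sketch below, under that assumption (the function name, gating scheme, and renormalization are illustrative, not the paper's exact rule), computes a gated $\lambda$-return; with all gates on it reduces to standard TD($\lambda$).

```python
import numpy as np

def adaptive_lambda_return(rewards, values, gamma, lam, gates=None):
    """Gated lambda-return over a length-T trajectory.

    rewards: r_0..r_{T-1};  values: V(s_1)..V(s_T) (bootstrap targets).
    gates: optional 0/1 mask over n-step backups (n = 1..T); None keeps
    all backups on and recovers the ordinary TD(lambda) target.
    """
    T = len(rewards)
    gates = np.ones(T) if gates is None else np.asarray(gates, dtype=float)

    # n-step returns G^(n) = sum_{k<n} gamma^k r_k + gamma^n V(s_n)
    G = np.zeros(T)
    partial = 0.0
    for n in range(1, T + 1):
        partial += gamma ** (n - 1) * rewards[n - 1]
        G[n - 1] = partial + gamma ** n * values[n - 1]

    # TD(lambda) weights (1-lam) * lam^(n-1), tail mass on the final return
    w = np.array([(1 - lam) * lam ** (n - 1) for n in range(1, T + 1)])
    w[-1] = lam ** (T - 1)

    # Adaptive switch: zero out gated-off backups, renormalize the mixture
    # (at least one gate must remain on for the target to be defined).
    w = w * gates
    return float(np.dot(w / w.sum(), G))
```

With $\lambda = 0$ only the one-step backup survives, and with $\lambda = 1$ the target is the full Monte Carlo return; gating then interpolates by excluding chosen horizons entirely rather than merely down-weighting them.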
Recent neural network models have significantly advanced the task of coreference resolution.
Leveraging domain knowledge is an effective strategy for improving the quality of the low-dimensional document representations inferred by topic models.