Existing knowledge graph embedding approaches concentrate on modeling relation patterns such as symmetry/asymmetry, inversion, and composition, but overlook the hierarchical nature of relations.
People frequently interact with information retrieval (IR) systems; however, IR models exhibit biases and discrimination against various demographic groups.
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
Empirical analyses show that, despite the challenging nature of generative tasks, we achieve a 16.5x model footprint compression ratio with little performance drop relative to full-precision counterparts on multiple summarization and QA datasets.
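For context, a model footprint compression ratio is just the full-precision size divided by the compressed size. The sketch below is illustrative only: the parameter count, bit-widths, and overhead term are hypothetical placeholders, not the paper's actual quantization configuration.

```python
def compression_ratio(num_params, full_bits=32, quant_bits=2, overhead_bytes=0):
    """Footprint compression ratio: full-precision bytes / quantized bytes."""
    full_size = num_params * full_bits / 8                   # bytes at full precision
    quant_size = num_params * quant_bits / 8 + overhead_bytes  # bytes after quantization
    return full_size / quant_size

# e.g. quantizing 32-bit weights to ~2 bits approaches a 16x ratio;
# ratios above 16x additionally require compressing other state (embeddings, etc.)
print(round(compression_ratio(125_000_000, full_bits=32, quant_bits=2), 1))
```

Any real scheme also stores per-group scales and other metadata, which is what the `overhead_bytes` term stands in for.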
Recently, prompt-based learning for pre-trained language models has succeeded in few-shot Named Entity Recognition (NER) by exploiting prompts as task guidance to increase label efficiency.
This new paradigm has revolutionized the field of natural language processing and set new state-of-the-art performance on a wide variety of NLP tasks.
We evaluate PTLMs' ability to adapt to new corpora while retaining knowledge learned from earlier corpora.
Unsupervised clustering aims to discover the semantic categories of data according to distances measured in the representation space.
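As a minimal illustration of clustering by distance in a representation space, here is a tiny pure-Python k-means sketch; the 2-D "embeddings" and the choice of k-means itself are assumptions for the example, not the method the snippet above describes.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid by
    squared Euclidean distance, then recompute centroids, repeatedly."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of 2-D "embeddings" split into two clusters
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
cents, cls = kmeans(pts, 2)
print(sorted(len(c) for c in cls))  # [2, 2]
```

In practice the points would be learned neural representations rather than hand-written coordinates, but the distance-based assignment step is the same.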
Recently, Nogueira et al. proposed a new approach to document expansion based on a neural Seq2Seq model, showing significant improvements on short-text retrieval tasks.
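The core idea of this style of document expansion can be sketched in a few lines: predicted queries are appended to the document text before indexing, so term-matching retrieval can hit vocabulary the original short text lacks. The `generate_queries` callable below is a hypothetical stand-in for the trained Seq2Seq model, and the sample queries are invented for illustration.

```python
def expand_document(doc, generate_queries):
    """Append model-predicted queries to a document prior to indexing,
    enriching its vocabulary for term-based retrieval (e.g. BM25)."""
    queries = generate_queries(doc)  # a trained Seq2Seq model in the original work
    return doc + " " + " ".join(queries)

# Hypothetical generator standing in for the neural model
fake_gen = lambda d: ["what is bm25", "bm25 ranking formula"]
print(expand_document("BM25 scores documents by term frequency.", fake_gen))
```

The expanded text, not the original, is what gets indexed; the query-likely terms added this way are what drive the retrieval gains.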