The potential of using a large language model (LLM) as a knowledge base (KB) has sparked significant interest.
We investigate the unsupervised constituency parsing task, which organizes words and phrases of a sentence into a hierarchical structure without using linguistically annotated data.
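To make the notion of a hierarchical constituency structure concrete, here is a minimal sketch that represents a parse as nested `(label, children...)` tuples and renders it in standard bracketed notation; this is a generic illustration, not the paper's method.

```python
# Illustrative only: a constituency parse as nested (label, *children)
# tuples. The toy sentence "the cat sat" is grouped into NP and VP
# constituents under a root S node.

def tree_to_brackets(node):
    """Render a (label, *children) tuple as a bracketed parse string."""
    if isinstance(node, str):          # leaf: a word
        return node
    label, *children = node
    inner = " ".join(tree_to_brackets(c) for c in children)
    return f"({label} {inner})"

parse = ("S",
         ("NP", "the", "cat"),
         ("VP", "sat"))

print(tree_to_brackets(parse))  # (S (NP the cat) (VP sat))
```

Unsupervised parsing aims to induce such groupings from raw text alone, without gold trees like the one hand-written above.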
Controllable text generation (CTG) by large language models has great potential to transform education for teachers and students alike.
State-of-the-art language generation models can degenerate when applied to open-ended generation problems such as text completion, story generation, or dialog modeling.
We train a neural model with this feedback data that can generate explanations and re-score answer candidates.
We test our method in a dialogue-based ITS and demonstrate that our approach results in high-quality feedback and significantly improved student learning gains.
We propose an unsupervised graph-based ranking model for extractive summarization of long scientific documents.
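A minimal sketch of the general unsupervised graph-based ranking idea (in the TextRank family, not the paper's exact model): sentences are nodes, edges carry word-overlap similarity, and PageRank-style scores identify the most central sentences for the summary. The similarity function and damping value below are illustrative choices.

```python
# Sketch of graph-based extractive summarization: build a sentence
# similarity graph, then run power-iteration PageRank over it.
import math

def similarity(s1, s2):
    """Word-overlap similarity, length-normalized (TextRank-style)."""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    overlap = len(w1 & w2)
    if overlap == 0:
        return 0.0
    return overlap / (math.log(len(w1) + 1) + math.log(len(w2) + 1))

def rank_sentences(sentences, damping=0.85, iters=50):
    """Score sentences by PageRank over the similarity graph."""
    n = len(sentences)
    weights = [[similarity(a, b) if i != j else 0.0
                for j, b in enumerate(sentences)]
               for i, a in enumerate(sentences)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                total = sum(weights[j])
                if weights[j][i] > 0 and total > 0:
                    rank += weights[j][i] / total * scores[j]
            new.append((1 - damping) / n + damping * rank)
        scores = new
    return scores

docs = [
    "graph based ranking selects central sentences",
    "central sentences summarize the document",
    "the weather is unrelated noise",
]
scores = rank_sentences(docs)
```

Here the second sentence overlaps with both others, so it sits at the center of the graph and receives the highest score; the top-ranked sentences would form the extractive summary.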
In this paper, we propose LexSub, a novel approach towards unifying lexical and distributional semantics.
Existing approaches to automatic summarization assume that a length limit for the summary is given, and view content selection as an optimization problem to maximize informativeness and minimize redundancy within this budget.
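The budgeted-optimization view described above can be sketched as a greedy MMR-style selector: repeatedly pick the sentence with the best informativeness-minus-redundancy gain until the word budget is exhausted. The scoring and penalty weight here are illustrative assumptions, not a specific system's settings.

```python
# Sketch: content selection under a length budget, trading off
# informativeness against redundancy with already-chosen sentences.

def select(sentences, informativeness, budget, redundancy_penalty=0.5):
    """Greedily choose sentences maximizing gain within a word budget."""
    chosen, used = [], 0
    remaining = list(range(len(sentences)))
    while remaining:
        def gain(i):
            words_i = set(sentences[i].split())
            # Redundancy: max word overlap with any chosen sentence.
            red = max((len(words_i & set(sentences[j].split()))
                       / max(len(words_i), 1)
                       for j in chosen), default=0.0)
            return informativeness[i] - redundancy_penalty * red
        best = max(remaining, key=gain)
        cost = len(sentences[best].split())
        if used + cost > budget:
            break  # best candidate no longer fits the budget
        chosen.append(best)
        used += cost
        remaining.remove(best)
    return [sentences[i] for i in chosen]

summary = select(["a b c", "a b c", "d e"],
                 informativeness=[1.0, 0.9, 0.5],
                 budget=6)
```

In the toy call, the duplicate second sentence is penalized for redundancy, so the novel third sentence is taken instead; the budget then rules out anything further.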
When performing a conceptual analysis, philosophers are interested in all forms of expression of a concept in a text, whether direct or indirect, explicit or implicit.