1 code implementation • CoNLL (EMNLP) 2021 • David Demeter, Doug Downey
The capabilities of today’s natural language processing systems are typically evaluated using large datasets of curated questions and answers.
no code implementations • 27 Jul 2024 • Dong Shu, Haoran Zhao, Xukun Liu, David Demeter, Mengnan Du, Yongfeng Zhang
Moreover, the subtle distinctions between similar and precedent cases require a deep understanding of legal knowledge.
no code implementations • 18 Jun 2023 • David Demeter, Oshin Agarwal, Simon Ben Igeri, Marko Sterbentz, Neil Molino, John M. Conroy, Ani Nenkova
Academic literature does not give much guidance on how to build the best possible customer-facing summarization system from existing research components.
1 code implementation • 23 Oct 2022 • Victor S. Bursztyn, David Demeter, Doug Downey, Larry Birnbaum
In this work, we present compositional fine-tuning (CFT): an approach based on explicitly decomposing a target task into component tasks, and then fine-tuning smaller LMs on a curriculum of such component tasks.
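The curriculum idea in CFT — fine-tune on component tasks before the target task — can be illustrated with a deliberately toy sketch. This is not the authors' implementation: `fine_tune`, the task names, and the dictionary "model" are hypothetical stand-ins that only show the ordering of the curriculum.

```python
# Toy sketch of compositional fine-tuning (CFT) as described above:
# the target task is decomposed into component tasks, and the model is
# fine-tuned on a curriculum of components before the target task.
# "fine_tune" and the task names are illustrative stand-ins, not the
# authors' code.
def fine_tune(model, task):
    model["skills"].add(task)  # stand-in for a gradient-based update on that task
    return model

target_task = "target task"
curriculum = ["component task A", "component task B", target_task]

model = {"skills": set()}
for task in curriculum:        # component tasks first, target task last
    model = fine_tune(model, task)
```

After the loop, the model has been exposed to every component task before (and in addition to) the target task — the ordering constraint that distinguishes a CFT curriculum from direct fine-tuning on the target alone.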
1 code implementation • ACL 2020 • David Demeter, Gregory Kimmel, Doug Downey
Neural Network Language Models (NNLMs) generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word vectors in a high-dimensional embedding space.
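The output layer described above can be sketched in a few lines: score every vocabulary word by the dot product of a prediction vector with the word-embedding matrix, then normalize with a softmax. This is a minimal illustration, not the paper's code; the vocabulary size, embedding dimension, and random vectors are arbitrary.

```python
import numpy as np

# Minimal sketch of an NNLM output layer: dot-product scores between a
# prediction vector h and every row of the word-embedding matrix E,
# followed by a (numerically stable) softmax. Sizes are illustrative.
rng = np.random.default_rng(0)
vocab_size, dim = 10, 8
E = rng.normal(size=(vocab_size, dim))  # one row per word vector
h = rng.normal(size=dim)                # prediction vector from the network

logits = E @ h                          # dot-product scores, shape (vocab_size,)
probs = np.exp(logits - logits.max())   # subtract max for numerical stability
probs /= probs.sum()                    # softmax: a distribution over the vocabulary
```

Because every word's probability is a softmax of its dot product with `h`, a word's position in the embedding space directly constrains how much probability it can ever receive — the structural property the paper examines.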
no code implementations • 11 Dec 2019 • David Demeter, Doug Downey
How can we augment today's neural models with such encodings?