1 code implementation • EMNLP 2021 • Sam Brody, Sichao Wu, Adrian Benton
In recent years, few-shot models have been applied successfully to a variety of NLP tasks.
no code implementations • ACL 2022 • Sheena Panthaplackel, Adrian Benton, Mark Dredze
We propose the task of updated headline generation, in which a system generates a headline for an updated article, considering both the previous article and headline.
no code implementations • 14 Apr 2024 • Taehyeon Kim, Ananda Theertha Suresh, Kishore Papineni, Michael Riley, Sanjiv Kumar, Adrian Benton
Despite the remarkable strides made by autoregressive language models, their potential is often hampered by the slow inference speeds inherent in sequential token generation.
no code implementations • 25 Jan 2023 • Adrian Benton, Tianze Shi, Ozan İrsoy, Igor Malioutov
English news headlines form a register with unique syntactic properties that have been documented in linguistics literature since the 1930s.
no code implementations • 23 May 2022 • Moniba Keymanesh, Adrian Benton, Mark Dredze
Previous work shows that pre-trained language models (PLMs) perform remarkably well on this task after fine-tuning on a significant amount of task-specific training data.
no code implementations • EMNLP 2021 • Adrian Benton, Hanyang Li, Igor Malioutov
However, the register of English news headlines, "headlinese", is very different from the register of long-form text, causing POS tagging models to underperform on headlines.
no code implementations • EMNLP (insights) 2021 • Sameer Bansal, Adrian Benton
Nickel and Kiela (2017) present a new method for embedding tree nodes in the Poincaré ball, and suggest that these hyperbolic embeddings are far more effective than Euclidean embeddings at representing nodes in large, hierarchically structured graphs like the WordNet nouns hypernymy tree.
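The key property of the Poincaré ball is that distances grow exponentially toward the boundary, giving the space room to embed exponentially branching trees. A minimal sketch of the hyperbolic distance function underlying such embeddings (the function names are illustrative, not from the paper's code):

```python
import numpy as np

def poincare_distance(u, v):
    # Hyperbolic distance between two points inside the open unit ball:
    # d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / denom)

# Points near the boundary are far from everything, which is why
# leaves of a hierarchy are typically pushed toward the ball's edge.
origin = np.zeros(2)
leaf = np.array([0.9, 0.0])
print(poincare_distance(origin, leaf))
```

For a point at Euclidean radius r from the origin, this reduces to 2·artanh(r), so the hyperbolic distance diverges as r approaches 1 even though the Euclidean distance stays below 1.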
1 code implementation • NAACL 2021 • Tianze Shi, Adrian Benton, Igor Malioutov, Ozan İrsoy
While the predictive performance of modern statistical dependency parsers relies heavily on the availability of expensive expert-annotated treebank data, not all annotations contribute equally to the training of the parsers.
1 code implementation • EMNLP (insights) 2021 • Ozan İrsoy, Adrian Benton, Karl Stratos
Mikolov et al. (2013a) observed that continuous bag-of-words (CBOW) word embeddings tend to underperform Skip-gram (SG) embeddings, and this finding has been reported in subsequent works.
no code implementations • CoNLL 2019 • Pallavi Patil, Kriti Myer, Ronak Zala, Arpit Singh, Sheshera Mysore, Andrew McCallum, Adrian Benton, Amanda Stent
The sources of knowledge we use are news text and Freebase, a manually curated knowledge base.
no code implementations • 2 Dec 2018 • Adrian Benton
In this thesis, we begin by showing how user representations can be learned from multiple types of user behavior on social media.
no code implementations • WS 2018 • Adrian Benton, Mark Dredze
Many social media classification tasks analyze the content of a message, but do not consider the context of the message.
1 code implementation • NAACL 2018 • Adrian Benton, Mark Dredze
We present deep Dirichlet Multinomial Regression (dDMR), a generative topic model that simultaneously learns document feature representations and topics.
no code implementations • 10 Dec 2017 • Adrian Benton, Margaret Mitchell, Dirk Hovy
We introduce initial groundwork for estimating suicide risk and mental health in a deep learning framework.
no code implementations • WS 2017 • Adrian Benton, Glen Coppersmith, Mark Dredze
Social media have transformed data-driven research in political science, the social sciences, health, and medicine.
3 code implementations • WS 2019 • Adrian Benton, Huda Khayrallah, Biman Gujral, Dee Ann Reisinger, Sheng Zhang, Raman Arora
We present Deep Generalized Canonical Correlation Analysis (DGCCA) -- a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other.
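DGCCA generalizes classical GCCA by passing each view through a view-specific neural network before solving the shared-representation problem. The linear core can be sketched as follows; this is a hedged illustration of classical GCCA (the function name and regularization constant are assumptions, not the paper's implementation), where DGCCA would replace each raw view matrix with a network's output:

```python
import numpy as np

def linear_gcca(views, k, reg=1e-8):
    # Classical GCCA: find a shared representation G (n x k) and per-view
    # maps U_j minimizing sum_j ||G - X_j U_j||_F^2 subject to G'G = I.
    # The solution takes G as the top-k eigenvectors of the sum of the
    # (regularized) projection matrices onto each view's column space.
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        C = X.T @ X + reg * np.eye(X.shape[1])
        M += X @ np.linalg.solve(C, X.T)
    eigvals, eigvecs = np.linalg.eigh(M)
    G = eigvecs[:, -k:]  # eigenvectors for the k largest eigenvalues
    Us = [np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ G)
          for X in views]
    return G, Us

# Three noisy views of a common 2-dimensional signal.
rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 2))
views = [Z @ rng.normal(size=(2, d)) + 0.01 * rng.normal(size=(50, d))
         for d in (4, 5, 6)]
G, Us = linear_gcca(views, k=2)
```

Unlike two-view CCA, this formulation handles arbitrarily many views symmetrically, which is the property DGCCA extends to nonlinear transformations.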